Are there laws that protect women from AI-generated abuse?

Fewer than half of all countries have laws that prosecute online abuse, and where such laws do exist, enforcement is weak.

Additionally, there is limited reporting and access to justice, and tech platforms lack accountability. The transnational nature of AI-generated digital abuse further drives impunity.

UN Women’s call to action for the 16 Days of Activism includes the need for laws and their enforcement to ensure perpetrators’ accountability, together with better support for victims and survivors and digital literacy for women and girls. 

Laws are beginning to adapt to emerging trends, although they are struggling to keep pace with rapid developments in generative AI. Some examples include:

  • The UK Online Safety Act (passed in 2023) made it illegal to share explicit images or videos that have been digitally manipulated. However, the Act does not cover the creation of pornographic deepfakes, nor their sharing where intent to cause distress cannot be proved. 
  • The EU's AI Act (2024) promotes transparency by requiring the creators of deepfakes to inform the public about the artificial nature of their work and providers of general-purpose AI tools to tag AI-generated content. 
  • In Mexico, Ley Olimpia recognises and punishes digital violence, and has inspired similar legislation in other countries in the region – Argentina, Panama and Uruguay are expected to follow. 
  • Australian legislation is being introduced to strengthen laws targeting the creation and non-consensual dissemination of sexually explicit material online, including material created or altered using generative AI and deepfakes. 
  • One recommended approach is global cooperation and sector-wide regulation mandating that AI tools meet a safety and ethics standard before being rolled out to the public. The Council of Europe's framework convention on artificial intelligence offers a model. The UN's recently established High-level Advisory Body on AI, reflected in the UN Global Digital Compact, is another example of such coordinated efforts.

Paola Gálvez-Callirgos, expert in AI and digital technology policy and governance, cautions: “There isn’t a one-size-fits-all model for AI governance. Policymakers must consider that national context and culture matter.”

Nevertheless, she believes there are basic measures all countries can take: criminalizing all forms of technology-facilitated violence against women, and investing in institutional capacities so that enforcement is possible.

Another loophole she recommends closing through legislation is the lack of mandated content provenance – the ability to trace the history of digital assets. “The producers of synthetic media tools must attach verifiable content credentials (in-file metadata or robust watermark/provenance per C2PA-style standards) that allow platforms and investigators to detect origin and manipulation”, she explains. “This will support automated filtering and make it harder for perpetrators to plausibly deny origin.” 
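To make the idea of verifiable content credentials concrete, here is a minimal Python sketch of a provenance manifest that a generator tool could attach and a platform could later verify. It is an illustration only, not the C2PA standard: the function names, the shared HMAC key, and the JSON manifest format are all assumptions for this example; real content credentials use certificate-based signatures embedded in the media file itself.

```python
# Illustrative sketch (NOT real C2PA): a creator tool signs a manifest
# over the file's hash; a platform later verifies origin and integrity.
import hashlib
import hmac
import json

SECRET_KEY = b"tool-signing-key"  # hypothetical; real systems use PKI certificates


def attach_credentials(content: bytes, generator: str) -> dict:
    """Build a signed provenance manifest for a piece of synthetic media."""
    manifest = {
        "generator": generator,
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_credentials(content: bytes, manifest: dict) -> bool:
    """Check both the signature and that the content itself is unaltered."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["sha256"] == hashlib.sha256(content).hexdigest())


media = b"synthetic image bytes"
cred = attach_credentials(media, "example-genai-tool")
print(verify_credentials(media, cred))         # True: intact, origin verifiable
print(verify_credentials(media + b"x", cred))  # False: content was altered
```

A manifest like this is what would let a platform automatically flag AI-generated uploads and what would make it harder for a perpetrator to deny where an image came from.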

Gálvez-Callirgos is part of UN Women’s AI School – a free, invitation-based course currently offered to women’s rights organizations under the ACT to end violence against women programme. Participants learn how to use AI tools ethically for advocacy, influence AI policy development, and leverage AI responsibly to prevent and respond to violence against women. The course also includes selected expert talks and innovation labs open to the public.
