Artificial intelligence (AI) technologies are rapidly transforming digital ecosystems, creating new opportunities for innovation while also reinforcing existing structural inequalities – particularly those rooted in gender. Among the most urgent and under-regulated consequences of AI deployment is the rise of technology-facilitated gender-based violence (TFGBV), including automated harassment, deepfakes, gendered disinformation and surveillance-enabled abuse.

This research provides a comprehensive analysis of the intersection between AI governance and TFGBV. It examines how international, regional and national policy frameworks engage with gendered risks introduced or exacerbated by AI and identifies critical gaps in regulation, protection and enforcement.

The study draws on four primary methods: a literature review; a comparative analysis of 24 national and seven international AI governance frameworks; a review of global, regional and national TFGBV legal frameworks and norms; and a validation workshop with experts from civil society, academia and policy fields. A taxonomy of AI-TFGBV risks was developed and used to assess both the content and scope of relevant frameworks.

Key findings include:

  • Fragmented and siloed governance: Most AI frameworks neglect TFGBV, while TFGBV norms fail to incorporate AI-related risks. This leaves victims without protection or recourse and undermines policy coherence.
  • Lack of enforceability: References to fairness or inclusion often lack implementation tools, such as audits, indicators or gender-responsive safeguards.
  • Underrepresentation and digital exclusion: Women and gender-diverse people – especially from the Global Majority – remain marginalised in AI development, access and governance.
  • Emerging national innovation: Countries such as Chile, Canada, Australia and the UK offer promising models that integrate gender-based analysis, human rights impact assessments or binding regulation of deepfakes and platform accountability.

Recommendations are structured across three pillars:

  1. AI-specific actions, such as mandating gender-responsive impact assessments and integrating TFGBV as a risk category in AI governance.
  2. TFGBV-specific actions, including updating legal definitions of violence to capture AI-enabled harms and strengthening redress mechanisms.
  3. Crosscutting actions, including building regulatory capacity, funding feminist innovation, ensuring intersectional data practices and enforcing corporate responsibility throughout the AI lifecycle.
