Not My AI: A feminist framework to challenge algorithmic decision-making systems deployed by the public sector

Author: 
Coding Rights

Throughout Latin America, governments are testing and piloting a wide variety of artificial intelligence (AI) systems to deliver public services. But what are the feminist and human rights implications?

Because these machines are designed and operated by the very same humans in power, AI systems are likely to cause or propagate harm and discrimination based on gender and its intersections with race, class, sexuality, age and territoriality, posing worrisome trends that should concern feminist movements.

Taking Latin America as a point of departure, this research seeks to contribute to the development of an anti-colonial feminist framework to question AI systems that are being deployed by the public sector, particularly focused on social welfare programmes. The researchers' ultimate goal is to develop arguments that enable the building of bridges for advocacy among different human rights groups, particularly feminists and LGBTIQ+ groups, especially in Latin America, but also beyond. They started by posing three research questions:

  • What is leading governments in Latin America to implement AI and other algorithmic decision-making processes to address public service issues?

  • What are the critical implications of such technologies in the enforcement of gender equality, cultural diversity, and sexual and reproductive rights?

  • How can we draw on feminist theories to provide guidelines for rebalancing the power dynamics enforced by the use of AI and other algorithmic decision-making systems?

This research was supported by APC as part of the Feminist Internet Research Network (FIRN), which is funded by the International Development Research Centre (IDRC).

Read the full report here.