
As artificial intelligence (AI) transforms societies, economies and politics, and reshapes the ways in which culture is created, promoted and experienced, it also reminds us how technology intersects with the rights people access and exercise. Cultural rights, for example, are inseparable from civil, political, economic and social rights, and their suppression undermines the broader right to development. From the stories we tell and the languages we preserve to the music we make and the communities we visibilise, technology in general, and AI systems in particular, are becoming promoters as well as gatekeepers of culture and identity.

We approach this debate from a human rights-based, feminist and intersectional perspective, recognising that societies and culture are never neutral. Women, LGBTQIA+ people, Indigenous communities and other marginalised groups have long been excluded from dominant cultural narratives. As a result, technology, including AI, risks automating these exclusions at scale. Inclusive cultural production, whether in stories, languages, artistic practices or collective knowledge systems, is both a right and a resource for equitable development. Yet the datasets on which AI is trained are often built by stealing, erasing or appropriating this work, reproducing hierarchies and power dynamics that create structural and societal barriers to accessing rights rather than dismantling them.

For decades, technology has been framed as a solution to social problems and a bridge between human rights, justice and communities. While digital tools have expanded access and amplified voices, they have also generated new forms of exclusion and harm for marginalised and underrepresented communities. As Morgan G. Ames observes in Charismatic Technology, each new innovation is promoted as a transformative breakthrough, creating what she terms a “charismatic” hold on societies. Still, these technologies are frequently developed without regard for the cultural contexts in which they operate, and with little capacity to deliver meaningful or lasting change.

Considering AI as the charismatic technology of today, Stanford researchers Xiao Ge and Chunchen Xu write in their study How Culture Shapes What People Want from AI: “There is a gold rush underway to optimize every urban function, from education to healthcare to banking, but there’s a serious lack of reflection and understanding of how culture shapes these conceptions.”

The rise of AI is not only transforming how societies function but also redefining the ways in which cultural life is expressed and governed. UNESCO’s Recommendation on the Ethics of Artificial Intelligence emphasises that culture and creativity are central to human dignity, and that technological innovation must advance diversity rather than homogenise it. But in practice, AI has been largely developed and deployed within economic and cultural frameworks shaped by a few powerful actors, mostly in the Global North. This concentration of technological power raises urgent questions about cultural sovereignty, participation and the collective right to development.

In many ways, the cultural implications of AI reflect a longstanding structural imbalance: the communities least represented in data systems are often those most affected by their decisions. When datasets overwhelmingly reflect Western epistemologies, AI models inevitably reproduce them, privileging dominant languages, aesthetics, art and forms of knowledge. As the UN Special Rapporteur in the field of cultural rights, Alexandra Xanthaki, has noted, these “tools are not neutral” and can marginalise local and Indigenous forms of expression. This concern manifests daily through digital infrastructures that determine which languages are translated, whose art is recommended, or which histories are considered credible.

In the Global South, this dynamic has particularly deep implications. As the 2025 UNESCO Global Report on Cultural Policies notes, “Artificial intelligence systems pose new risks to cultural diversity and the visibility and circulation of diverse cultural expressions.” The technology depends on training data scraped from local cultural production, yet rarely reinvests in the communities that generate it. The result is a pattern of digital dispossession in which cultures and communities are reduced to content and datasets. The promise of AI for “innovation”, “inclusion” or “efficiency” is often offset by extractive data practices and unaccountable algorithmic governance.

At the same time, AI is changing the very notion of authorship and creativity. AI models are trained, often without consent, on human expression that has historically been rooted in experience and empathy, and their cultural output imitates it. This raises critical questions about moral rights and intellectual ownership of cultural experiences. As the UN Secretary-General’s report on the role of new technologies for the realisation of economic, social and cultural rights stresses, “Many algorithms tend to reinforce existing biases and prejudices, thereby exacerbating discrimination and social exclusion. Data-driven tools often encode human prejudice and biases, with a disproportionate impact on women and minority and vulnerable groups that are the subjects of those prejudices and biases.” The report emphasises that these technologies must be governed through frameworks that prioritise equality, non-discriminatory participation and accountability. The UNESCO Global Report on Cultural Policies adds, “The digitization of cultural heritage must be accompanied by comprehensive policies to address governance, ethical risks and cultural data sovereignty.”

A feminist approach insists that power must be interrogated at every layer of this technological architecture, from who designs the algorithms, to who is represented in datasets, to who benefits economically from the deployment of these technologies. Feminist scholarship on AI ethics has consistently argued that systems designed without gendered, racial or cultural awareness risk amplifying precisely the hierarchies they claim to disrupt. This is evident in how automated systems regulate online visibility: women, queer and racialised users are more likely to face content removal, misclassification or harassment driven by biased moderation models. These dynamics mirror broader patriarchal and colonial logics, in which visibility is both a privilege and a risk, and cultural expression is scrutinised through algorithmic control.

The 1986 Declaration on the Right to Development establishes that development is a process that should enable all peoples to participate in, contribute to and benefit from economic, social, cultural and political progress. When AI systems mediate this participation, they effectively become instruments of governance. Their design and deployment therefore determine whose development is realised, and whose is deferred. Cultural rights, which have often been treated as secondary to economic or political priorities, are in fact the foundation upon which inclusive development rests.

This submission situates AI as both a product and a producer of culture. AI systems are built upon human creativity and labour, yet they also shape future cultural forms by influencing who and what is visible or considered valuable. This influence extends beyond cultural production into education, journalism, entertainment and public discourse, shaping the narratives through which societies understand themselves. In this sense, AI is not merely a technological innovation, but a cultural infrastructure that can either democratise or colonise the global cultural commons.

Read the full joint submission here