The Association for Progressive Communications (APC), as a networked organisation with a diverse membership rooted in the Global South, has played a significant and consistent role in publishing content that amplifies the voices and perspectives of marginalised communities in digital inclusion and in digital and internet rights spaces and processes.

As a human rights and feminist network working to enable social, gender and environmental justice for all people, APC joins forces with others to defend a human rights and climate justice-based approach to artificial intelligence (AI). We understand that the advance of these technologies poses many risks and negative impacts – from their calamitous environmental footprint to their use for surveillance and the violation of human rights.

We believe that content made by people – which reflects their diversity in experiences, positionalities and voices – should be at the core of our content production approach and editorial policies. As part of this journey and considering ongoing developments, APC states that we discourage the use of large language models (LLMs) or other generative AI tools in the development of content for our platforms and channels. In this policy we present the reasons behind this decision, recognising the importance of finding a balance on the issue and fostering a culture of open dialogue. We believe that the different uses of AI and motivations behind them can be made visible to our audiences and discussed within our community so we can build on the collective knowledge to keep our approach to this issue updated. 

 

Prioritising human creation

We have observed a significant increase in the use of generative AI tools to produce and review content outputs such as research, articles, summaries, pitches, translations, images and illustrations, among others. We believe such use is detrimental to the development of people’s capacities and creative thinking, hinders the amplification of diverse voices from the ground by homogenising approaches, narratives and styles, and is built on years’ worth of labour by researchers, writers, activists, advocates and artists, without proper financial compensation or acknowledgement of sources and authorship.

This compounds the negative impact that generative AI has on the remuneration of content-creation professionals who are already often underpaid, such as writers, proofreaders, editors, translators and artists. It is also important to note our lack of trust in how these tools consult and credit sources, and in the fact-checking processes embedded in them. Using such tools may therefore lead authors to rely on sources of information that are not credible or factual, while integrity of information is expected of authors producing content for APC platforms.

Generative AI platforms can not only perpetuate but even amplify societal biases, predominantly gender and racial ones, through user interactions, deepening prejudice and discrimination. These tools are developed in specific cultural and demographic contexts that shape how they operate: their way of “reasoning” is far from reflecting the diversity of our communities and of the world.

This is exacerbated by user behaviour that exploits AI capabilities to create problematic content. For example, AI image upscaling tends to favour certain demographics, particularly Caucasian features. This phenomenon, known as colourism, together with biases such as ageism, highlights the disturbing tendency of AI algorithms to reinforce harmful stereotypes.

In keeping with APC’s vision, mission, values and editorial policy – while also noting that environmental justice is one of the pillars of our current strategic plan – APC staff, contractors and contributors are strongly discouraged from producing content to be published on APC channels, including articles, blog posts, translations, videos, podcasts or illustrations, using LLMs or other generative AI tools.

As a diverse and grounded community, APC values content that has been produced as the result of a creative human process, which is nurtured by the lived experiences, human interactions, expertise and diversity of the people and organisations that make up the network, and contributes to strengthening our collective intelligence.

Also, as people-centred technology innovators and practitioners, APC believes that the internet is a public resource, and it is committed to promoting alternative infrastructure and economic models that contribute to the public commons. APC values local initiative and ownership, open content, open standards and free/libre and open source software (FLOSS), and technology solutions that are appropriate and affordable, which are not yet integral characteristics of AI tools.

 

Acknowledging contradictions and fostering a culture of transparency 

While we recognise all the problems posed by these tools, we also acknowledge that AI tools can be used for content production in different ways, on different scales, and for different reasons. Like other tools, their use can have positive and negative implications at the same time, and motivations can vary greatly. We know that tools like these can be a concrete and accessible alternative, for example, for authors from the Global South who make the generous effort of writing in non-native languages, such as English or Spanish, for APC channels. As another example, we are aware that people with disabilities also use AI as an assistive tool in their work.

APC will not demand that contributors avoid using AI tools but, as we believe transparency is crucial, APC does expect to be informed about the extent to which such tools are used. In these cases, APC will prioritise transparency for our audiences, with a clear public disclaimer that the content was produced with the intervention of AI tools, specifying the nature and extent of the use of AI as much as possible.

We believe that keeping the dialogue open and informing our audiences about how and when AI tools were used is the best way to strike a balance between our own position on the issue and our communities’ autonomy to position themselves differently. At the same time, a culture of transparency can help us evolve this policy through open dialogue, as well as help us identify any gaps that lead people to use these tools and think of alternatives to address them.

 

This policy in practice

APC appreciates the efforts of its writers to craft stories about the nuanced lived realities of the communities they work with and around, and we are committed to:

  • Working with authors to amplify the voices and experiences of the communities highlighted in their stories, while ensuring throughout the editing process that published articles meet APC standards.

  • Offering proofreading, always carried out with the understanding that many authors may not be writing in their first language, since the APC website is available only in English, Spanish and French.

APC reserves the right to assess publications on a case-by-case basis and may, at its discretion, decide not to publish content that has been entirely or mostly produced using AI tools.

Building on collective knowledge to navigate the challenges of emerging technologies, we recognise that this policy will also need to evolve, and we will update it as needed.

APC understands that generative AI technologies are not only here to stay but will become increasingly integral to most technological developments, and we believe working collaboratively is the best path to mitigation strategies and to harnessing AI's potential in a way that promotes fairness and inclusion.

In parallel, we will continue to amplify the debate on the implications of using AI, and defend a human rights, ethical and climate justice-based approach to it, using strategic communications to support our community advocacy efforts. 

 

This policy draws on several sources, including APC’s vision, mission, values and editorial policy, its FLOSS policy, Global Voices' policy on AI, and GenderIT.org's pitch guidelines on the use of AI.