“Some of our activists are drafting statements with ChatGPT, and I’m worried about what approach we should take.”

Many organisations likely share similar concerns. With the emergence of various generative AI services, from chatbots such as ChatGPT and Gemini to tools that create images, music, and videos, a growing number of citizens are using them for both professional and personal purposes. Civil society activists are no exception. However, such use typically rests on individual judgment, even for work-related tasks, and almost no organisation currently has an organisational-level policy on generative AI.

There are many points that civil society organisations (CSOs) must consider when using generative AI. For example, if factual inaccuracies (hallucinations) produced by generative AI find their way into an organisation’s official documents, the organisation’s credibility can be severely damaged. Security issues may arise if personal or confidential information is uploaded to unreliable commercial services. The output of generative AI may also contain biases that conflict with the organisation’s values. And drafting a statement with generative AI may bypass processes that are crucial for activist capacity building and internal organisational deliberation. If activists use AI tools at their individual discretion without an organisational policy, problems beyond the organisation’s control are likely to emerge.

However, in the Korean context, there is a lack of guidelines available regarding whether it is appropriate for civil society organisations to utilise generative AI services, what principles and policies should govern their use if they choose to do so, and what guidance can be referenced from a human rights perspective. Moreover, the current status of which AI tools activists are using for which tasks has not been documented. This guide originates from the realisation that we need to help civil society organisations and activists establish generative AI policies and properly utilise these tools when necessary.

To create this guide, we conducted a survey on which AI tools are actually being used for which tasks, how useful generative AI is perceived to be, and what problems users are experiencing. We gathered opinions not only from activists in Korea but also from activists worldwide through the APC network. While the limited sample size restricts the survey’s statistical significance, it confirmed the real concerns activists feel and their shared understanding of the issues. Even activists who rarely use generative AI responded and shared their thoughts.

Furthermore, we held workshops on generative AI with civil society and labour union activists. We shared the survey results and a preliminary policy framework, and participants exchanged their experiences and perspectives. Through this process, we reconfirmed that what matters is not simply reaching a consensus but honestly sharing feelings and concerns. The policy framework presented in this guide is merely a starting point; what is paramount is the process of each organisation creating its own policy that reflects its reality and the voices of its activists.

While some activists use generative AI with interest, many others remain uncomfortable with it. We want to be clear that this guide is not intended to encourage the use of generative AI. It is also a concern that the development of major generative AI models and the provision of related services are controlled almost exclusively by Big Tech companies. Although this guide focuses on the commercial generative AI services currently in dominant use, we fully recognise the need to overcome these structural limitations.

Despite various limitations, we hope this guide will be of some help to organisations and activists currently contemplating policies related to generative AI.

Download this guide here.