Tackling online hate speech in Africa and beyond: “We can't trust Big Tech to abide by its own rules”

By Fungai Machirori

Almost 30 years ago, hate speech in Rwanda fuelled one of the most devastating and well-documented genocides of the modern era. At that time, radio was still a powerful tool of mass communication and played a significant role in the dissemination of tribally charged hate speech. Today, the internet has taken on a similar role with its exponential potential for digital virality.

Digital communications in Africa and beyond have set a new precedent in the capacity for mass dissemination of hate speech that endangers the safety of citizens. “Online hate speech continues to grow at the same rate as the evolution of technology, and more specifically, digital platforms,” observed Racheal Nakitare of the Kenyan chapter of the International Association of Women in Radio and Television (IAWRT), an APC network member. Hate speech is on the rise as mobile penetration grows everywhere, including in Africa. According to the GSMA (GSM Association), 495 million people in sub-Saharan Africa had subscribed to mobile services by the end of 2020, representing 46% of the region’s population. WhatsApp remains the most popular social media messaging application in Africa.

In the run-up to Kenya’s elections in August, hate speech and calls for ethnic violence on social media became a national issue. In Ethiopia, calls for violence against ethnic minorities abound on social media as the Tigray conflict has continued. The conflict between the Ethiopian federal government and the Tigray People's Liberation Front (TPLF) has also led to the shutdown of media organisations and the detention of journalists and bloggers. In Ghana, the country’s mooted anti-LGBTIQ+ bill has led to increased homophobic sentiment online, while anti-LGBTIQ+ sentiment has also been observed in Tunisia where security forces have targeted LGBTIQ+ activists through doxxing (malicious, non-consensual sharing of their private details and other information identifying them on social media). In South Africa, online hate speech has heightened xenophobic sentiment against non-South Africans working in the country. Zimbabweans, one of the largest contingents of foreigners in South Africa, have experienced some of the worst consequences of xenophobia with hashtags such as #ZimbabweansMustFall and #PutSouthAfricansFirst often mobilised to incite xenophobic sentiment against them and other groups.

Hate speech prepares the ground for hostility

Nakitare also noted that since platforms like Facebook, Twitter and TikTok are now often the main sources of information and communication for many, they are setting a dangerous precedent for the scope and reach of disinformation and hate speech. Hate speech also goes largely unflagged in encrypted and closed social spaces like WhatsApp.

Irene Mwendwa of APC member Pollicy, a feminist collective of technologists based in Uganda, observed that several studies have highlighted how online hate speech and violence halt women's participation in politics and economic activity. “These specifically affect the economic potential and autonomy of many, since women's roles, skills and their full potential online remain underutilised in most African societies,” she pointed out.

Rasha Younes, a researcher with Human Rights Watch on LGBT rights in the Middle East and North Africa, said that many LGBTIQ+ people who report hateful posts receive an official response from tech platforms claiming that the content does not violate their company guidelines.

Paula Martins, policy advocacy lead at APC, pointed out that digital spaces like social media platforms are simply “the new kid on the block” when it comes to hate speech. “We need to recognise that this problem has long preceded the advent and expansion of the internet and of social media,” she explained. And just like its offline manifestation, solutions are not easy to implement but there are efforts under way to try to stem its impact.

The #ChallengeHateOnline Twitter campaign

As online hate speech widens its scope, global efforts to tackle it are also growing. A key example is the United Nations Strategy and Plan of Action on Hate Speech launched in 2019. One of its key commitments is to engage with social media companies on how they can support UN principles and actions against hate speech, including engaging and supporting its victims. The UN also observed the first ever International Day for Countering Hate Speech on 18 June 2022.

To mark this important inaugural day, APC ran a Twitter campaign under the hashtags #ChallengeHateOnline and #BastadeOdioenLínea, initiating a holistic conversation that engaged online users and other stakeholders on how to effectively address online hate speech. APC's media outreach lead, Leila Nachawati Rego, noted that the campaign allowed APC to incorporate strategic issues and approaches that went beyond just user responsibility.

“Many messages around the main UN campaign revolved around user responsibility — ‘Don’t share this type of message’, ‘What can you do?’, ‘You have a responsibility in not propagating hate speech’,” said Nachawati. “The content around our hashtag included a more comprehensive, nuanced and multifaceted approach to the analysis of the root causes, responsibility by governments and corporations, and community responses to this issue.”

The APC campaign successfully balanced the day’s focus by curating a more comprehensive view of the issue. It featured robust contributions from global civil society organisations and actors who engaged on various aspects of online hate speech, including Digital Rights Nepal on doxxing, the Global Fund for Women on systemic structures supporting hate speech and Pan Africa ILGA on how hate speech is affecting various marginalised communities.

Tricky challenges in tackling this crisis

Technology companies do now offer channels for users to flag hate speech. However, Martins of APC noted that these remain “difficult to access, the response is delayed, and there is not real interaction with complainants in terms of follow up of cases.”

Also, what constitutes hate speech can itself be hard to define. Lillian Nalwoga at the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), an APC member based in Uganda, observed that without clarity on how hate speech is defined from country to country, acting against it becomes harder. Nalwoga also pointed out the problem of jurisdiction. “If a hate speech crime has been committed elsewhere and someone reports it, for instance, from Uganda, where would you go to report it?”

Martins also made the point that social media companies’ content moderation policies, unless otherwise required by statutory regulation, will as a rule prioritise sustaining their business models. And their algorithms, often deploying artificial intelligence (AI), were developed without taking into consideration their impact on hate speech. These algorithms have been known to censor activists and to be biased against Black people, for example.

TikTok's African content moderators recently complained about having to review psychologically taxing videos of suicides and animal cruelty, all the while earning less than US$ 3 per hour. Earlier this year, a Kenyan former Facebook content moderator filed a lawsuit against the company citing poor working conditions and remuneration, among other issues.

Facebook describes its AI models for detecting hate speech as “super efficient”, but in light of major examples depicting otherwise, this is debatable. Rosie Sharpe at Global Witness noted that their investigations into hate speech are deliberately set up to be easy for Facebook’s AI to detect, yet such content still manages to get approved without much gatekeeping. In the run-up to the Kenyan elections in August, Global Witness submitted advertisements on Facebook containing deliberate hate speech and incitement to tribal violence to test the efficiency of the platform’s content moderation. The advertisements, in both English and Swahili and featuring calls to commit atrocious acts like rape and slaughter, still passed the platform’s checks and were approved for publication.

“We predicted that the Swahili advertisements would all be accepted by Facebook and that the English ones would all be rejected,” explained Sharpe, the assumption being that Facebook’s AI performs better in English than in other languages. This also raises the concern that calls for more linguistic diversity in content moderation may not help much on their own, since diversity alone does not solve the issue of harmful content being approved for publication.

Dealing with hate speech will always be a very hard endeavour, according to Martins, since it is about balancing freedom of expression with the right to non-discrimination (among others). “We have seen too many laws that are passed under the justification of curtailing hate speech or online violence, but which are actually aimed at controlling speech,” she said. “And these laws tend to always be applied in a manner that will end up silencing or censoring groups in situation of vulnerability and political dissidents.” The most recent example of this is Uganda’s new amendments to the Computer Misuse Act, which are ripe for misuse to quell dissent against the powerful.

Responding to hate speech requires nuanced action by governments, companies and communities, but without addressing its root causes, hate speech will simply find other places to flourish, online or off.

Some ways forward

Martins observed that in the absence of a complete restructuring of social media companies’ business models, “palliative” measures need to be taken. These include more transparency around algorithms and content moderation, ensuring moderation policies abide by international human rights standards, training moderators better, and amplifying moderation in local languages and contexts. “The litmus test for social media platforms is the experience of their most marginalised user,” added Mwendwa of Pollicy.

States and national actors also have a role to play.

Kenya’s National Cohesion and Integration Commission (NCIC), which monitors online hate speech, offers a glimpse of what is possible. Before the Kenyan elections, the NCIC served Facebook a seven-day ultimatum to deal with hate speech and incitement to violence on the platform, failing which it would be suspended in the country. The commission, however, does not possess the power to prosecute cases, and Nakitare pointed out that there needs to be more activist collaboration with institutions like the NCIC on these issues.

In a similar context of upcoming elections next year in Zimbabwe, Jestina Mukoko of the Zimbabwe Peace Project sees a role for a commission like Kenya’s NCIC. “The National Peace and Reconciliation Commission should take this up,” she asserted. “Otherwise electioneering is going to be toxic and particularly affect groups like young women who have already largely become silent for fear of being attacked online.”

Organisations like Pollicy are exploring how governments can be engaged to curb online hate speech against women politicians and how AI impacts democracy, including through campaigns to promote women leaders' digital resilience and increase their political participation.

“Ultimately, what's needed is for governments to regulate Big Tech companies to hold them accountable,” concluded Sharpe. “The fact that we reveal time and time again that social media companies’ policies are not enforced, and not worth the paper that they're written on, shows to me that we can't trust Big Tech companies to abide by their own rules. And instead, we need governments to step in and hold them to account.”

Amid all these challenges, not all is lost. Liz Orembo, a trustee with the Kenyan APC member and think tank KICTANet, believes that society is becoming more conversant with hate speech and recognising how it impacts our politics. The generally peaceful elections in Kenya attest to this and offer a note of hope in the fight against this scourge.
