
On 26 January 2024, the International Court of Justice (ICJ) ordered provisional measures in the case brought by South Africa against Israel, determining the plausibility of genocide committed by Israel against the Palestinian people in Gaza. Among other measures, the court ordered Israel to take all steps within its power to prevent and punish direct and public incitement to commit genocide, an inchoate crime punishable in itself under the Genocide Convention, the first human rights treaty adopted by the United Nations General Assembly, back in 1948. Israel was also ordered to preserve, and prevent the destruction of, evidence related to genocidal acts committed in Gaza.

From quoting a genocidal statement made by an Israeli official online to screening social media videos of Israel Defense Forces (IDF) soldiers in the courtroom, the ICJ case sheds an important light on the link between social media platforms and atrocity crimes. This is reflected in the chilling words of one of South Africa’s lawyers, Blinne Ní Ghrálaigh, about “the horror of the genocide against the Palestinian people being livestreamed from Gaza to our mobile phones, computers and television screens – the first genocide in history where its victims are broadcasting their own destruction in real time.” But most importantly, the case raises serious questions about the role, responsibilities, or even the complicity – in legal and non-legal terms – of tech companies when their services are used directly or indirectly for spreading incitement to violence and genocide.

Since 7 October 2023, deeply disturbing dehumanising speech, genocidal rhetoric, and incitement to violence against Palestinians by Israeli officials and public figures have exploded online. The ICJ cited statements made by Israeli President Isaac Herzog, Israeli Defence Minister Yoav Gallant, and a post shared by then Minister of Energy and Infrastructure, Israel Katz, on X (formerly Twitter), which read: “The line has been crossed. We will fight the terrorist organization Hamas and destroy it. All the civilian population in Gaza is ordered to leave immediately. We will win. They will not receive a drop of water or a single battery until they leave the world.” Other Israeli officials and politicians have made countless similar statements on social media. For instance, the Knesset’s deputy speaker called for the full destruction of Gaza on X: “Erase Gaza. Nothing else will satisfy us. It is not acceptable that we maintain a terrorist authority next to Israel. Do not leave a child there, expel all the remaining ones at the end, so that they will not have a resurrection.” Law for Palestine, a non-profit, has collected over 500 hair-raising genocidal statements by various Israeli officials and public figures, including some made even after the court instructed Israel to prevent and punish this crime.

Platform accountability and armed conflict

While states primarily bear this duty under the Genocide Convention, social media companies should certainly take note. The UN Guiding Principles on Business and Human Rights (UNGPs) make it clear that companies should respect human rights, identify and mitigate harm, and remedy abuses wherever they operate. These responsibilities become particularly heightened in situations of armed conflict, as companies risk “being complicit in gross human rights abuses committed by other actors.” Guiding Principle 23 of the UNGPs even recommends that businesses treat this risk as a legal compliance issue and conduct “heightened” human rights due diligence to avoid and mitigate causing, contributing to, or being directly linked to these crimes or abuses. In addition to the plausible risk of genocide, a crime that can be committed in times of war or peace, companies should account for other violations of international humanitarian law in contexts of armed conflict and military occupation.

There is no single format for how businesses should conduct such heightened due diligence, but several guides exist. For example, the United Nations Development Programme’s (UNDP’s) guide, Heightened Human Rights Due Diligence for Business in Conflict-Affected Contexts, lists a number of “red flags” that should prompt companies to conduct such an assessment, introspect on their impact on conflicts, and mitigate foreseeable harms. These include records of serious violations of international human rights and/or humanitarian law; increased inflammatory rhetoric or hate speech targeting specific groups or individuals; strict control or banning of communication channels; distortion of facts, censorship, and dissemination of propaganda and/or misinformation; and shutdowns of the internet or of websites.

There has been no shortage of red flags for tech companies since the start of Israel’s brutal war on Gaza. The ICJ took note in its ruling of the statement by 37 UN human rights experts on 16 November 2023, including the working group on the issue of human rights and transnational corporations, sounding the alarm over “discernibly genocidal and dehumanizing rhetoric coming from senior Israeli government officials, as well as some professional groups and public figures, calling for the ‘total destruction’, and ‘erasure’ of Gaza.” Most notably, the UN experts’ warning against the risk of genocide in Gaza was directed not only at states, who bear this responsibility first and foremost, but also at private businesses, who “must do everything [they] can to immediately end the risk of genocide against the Palestinian people.”

Similarly, the UN Committee on the Elimination of Racial Discrimination issued a decision under its Early Warning and Urgent Action Procedures on 21 December 2023, to warn against “the racist hate speech, incitement to violence and genocidal actions, as well as dehumanizing rhetoric targeted at Palestinians since 7 October 2023 by Israeli senior government officials, members of the Parliament, politicians and public figures.” This is not to mention the unprecedented carpet-bombing of state-sponsored disinformation online and ongoing communications blackouts in Gaza.

The most destructive bombing campaign

Tech companies, however, should not need warnings to assess their risks and culpability in the ongoing atrocities when the scale of death and mass destruction has been unprecedented in modern history. According to experts, Israel’s destruction of the Gaza Strip in the first two months of the war exceeded the razing of Ukraine’s Mariupol in 2022, Syria’s Aleppo in 2012-2016, and even the Allied bombing of Germany in World War II. As of 4 March 2024, at least 30,534 Palestinians have been killed in Gaza since 7 October 2023 and over 71,920 have been injured. The number of Palestinian children killed in the first three weeks of the war alone surpassed the number of children killed in armed conflicts around the world in any year since 2019. More than 80% of the population – a staggering 1.9 million Palestinians – have been internally displaced with no food, medicine, water, or safe shelter. Israel’s war has also killed more Palestinian journalists than any other conflict, and the same can be said of its killing of UN workers.

Despite these unprecedented atrocities, none of the social media platforms, including Meta, YouTube, X, and TikTok, or messaging apps such as Telegram have publicly conducted and communicated their efforts to mitigate risks stemming from this carnage. Instead, every single one of these platforms is littered with war propaganda, dehumanising speech, genocidal statements, explicit calls to violence, racist hate speech, and records of Israeli soldiers bombing mosques and civilian homes out of boredom, torturing and humiliating blindfolded Palestinian detainees, and celebrating war crimes, all available on mobile screens.

While this content goes largely unmoderated, Palestine-related content, including documentation of the atrocities in Gaza, has been systematically and disproportionately removed. Meta, which accounts for the lion’s share of the censorship of Palestinian voices among all platforms, is fully aware of its over-moderation of Palestinian content, as outlined in a human rights due diligence report it commissioned in 2021. It is equally aware of a second problem: Hebrew-language content, and particularly hate speech, is significantly under-moderated simply because the company lacks fully operational hate speech classifiers to detect and remove such content. Despite knowing all this full well, Meta has failed to mitigate or address the negative human rights impact its content moderation policies and actions have inflicted on Palestinians.

But the issue here is larger than just content moderation. Assessing the illegality of a piece of content and how it may facilitate or contribute to the perpetration of atrocity crimes is merely one dimension of understanding the role platforms play in armed conflicts. What has rarely been assessed, whether by companies themselves or by independent auditors, is how these platforms influence, contribute to, or exacerbate conflict dynamics – especially in a context of military occupation and apartheid, where power asymmetries between the occupier and the occupied are stark and deeply harmful. Google, for example, which profits from aggressive targeted war propaganda ads run by the Israeli government on YouTube, may find that the Israeli ads on their own do not violate its content policies. But taken as a whole, that is, in the context in which these ads are allowed, the company is giving a platform and influence to a nuclear-armed power with a dismal record of international law violations, enabling it to disseminate and shape a global public narrative that justifies its current genocide in Gaza.

Businesses can never be neutral actors in conflicts, and tech companies are no exception. Whether through action or omission, they have a record of fuelling conflicts, as in the cases of Myanmar and Ethiopia. In the same vein, they have long turned a blind eye to the Israeli occupation and its violence, online and offline, and undermined Palestinians’ rights. It is a glaring moral and legal failure that social media companies continue to shirk their responsibilities even as genocide is live-streamed from Gaza on their platforms. The ICJ case should set off every one of their alarms.


Image: UN Photo/ICJ-CIJ/Frank van Beek

Marwa Fatafta is MENA Policy Manager at Access Now. She has written extensively on the digital occupation in Palestine and focuses on the role of new technologies in armed conflicts and humanitarian contexts and their impact on historically marginalized and oppressed communities.