What are the consequences of blocking access to hateful content? What role do individual internet users play in perpetuating discrimination online?
Although online hate speech has been a growing concern for many years, recent cases have demonstrated the complexity of this issue, and its impact on cultural, political, social, and economic well-being. There is also growing tension over how to respond to online expression that does not qualify as hate speech, but which nonetheless contributes to offline victimization, marginalisation, and the violation of basic human rights.
While the UN and other bodies have strict criteria for identifying and responding to hate speech, individual countries continue to censor online expression in a manner that clearly does not meet these criteria. After an anti-Islam film sparked unrest and violence in Libya and Egypt, governments in those countries demanded that Google block access to the video in the country-specific versions of YouTube, contributing to the growing number of ‘splinternets’ around the world. In Pakistan, where Google has not yet developed a country-specific version of YouTube, the government responded to protests by blocking all access to the video-sharing platform.
In a statement on their blog in September, APC member Bytes for All Pakistan pointed out the dangers of using censorship as a tool to combat hateful online content:
“‘Innocence of Muslims’ is not the first example of anti-Islam or anti-religious online content, and it won’t be the last. It is impossible to filter and censor out millions of such opinions that may eventually result in banning of all communication technologies and/or cripple critical infrastructure. In addition, such desperate attempts to ban disagreeable content only serves to draw attention towards it.”
Not only does censorship fail to deter acts of hateful expression online, it may in fact encourage further hatred and marginalisation. The recent unmasking of Reddit troll Violentacrez has demonstrated that individual acts of hate are in many ways enabled by a culture of exploitation. As Whitney Phillips describes in an article on trolling, the current media and cultural landscape very much supports individual acts of hatred:
“[…] trolls are cultural scavengers, and engage in a process I describe as cultural digestion: They take in, regurgitate, and subsequently weaponize existing tropes and cultural sensitivities. By examining the recurring targets of trolling, it is therefore possible to reverse-engineer the dominant landscape.”
This culture of hatred is not limited to racial or religious intolerance. Amanda Todd’s recent suicide, after years of online victimization by her peers, demonstrates how easily online spaces and identities can become a tool for oppression.
While the man who sexually harassed and blackmailed Todd is being pursued by local police and by Anonymous hackers, the record of her victimization and the resulting bullying have raised serious concerns over the dangers of online activity:
“The bullying of Amanda Todd shows how online and offline behaviour are interwoven, how face-to-face cruelty slips into online performance. On the internet, the victim’s own memory of events is not the one that endures. Instead, the story is written by the tormentors, their ownership of the narrative itself a form of torment.” (Sarah Kendzior, Al Jazeera)
The internet is not a benign tool. While it has the potential to improve individual rights and freedoms, there are very real threats posed by this form of communication, which is dominated by a growing culture of intolerance, oppression and victimization. As users of the internet, and members of a global civil society, we all have an obligation to change the dominant discourse, and make online spaces safe for vulnerable individuals and communities.
Affordable access to the internet is only part of the response needed. Two-thirds of people around the globe are still not connected to the internet, and many of those accessing the internet for the first time experience far less freedom than early internet users did. It is important to create safe spaces for new internet users to share information, access content and participate in online discourse. Digital storytelling and campaigns such as Take Back the Tech! are essential to creating safe spaces online, often empowering victims of abuse to take back ownership of their story. Secure online communication (SOC) training can help users manage their own safety online, and is particularly important in countries where online monitoring occurs.
At an individual level, we as internet users must take responsibility for how we contribute to online culture. We need to be conscious of the expressions we make, the identities we foster, and the ideas we spread through our online presence. Our contribution to the online consciousness amounts to a great deal in the offline world, and in the movement towards democratisation.