By Deborah Brown and Rafik Dammak. Published by APCNews.
Among the more than a dozen reports from the UN High Commissioner for Human Rights that were discussed last week at the Human Rights Council (HRC), one addressed a topic very relevant to internet policy and regulation: how to protect and promote human rights while preventing and countering violent extremism. Aside from being one of the most significant challenges to the enjoyment of human rights, peace, and security today, countering violent extremism (CVE) while protecting human rights defenders presents an internet governance challenge, especially in the Middle East and North Africa (MENA) region.
While violent extremism is a deeply rooted societal problem, there is often a search for a technical solution. CVE measures often rely on communications surveillance, big data, and the use of algorithms to detect violent extremism online. They involve filtering content, freezing and removing social media accounts, and in extreme cases, shutting down or disrupting internet access, violating the fundamental rights to freedom of expression and privacy.
Countering violent extremism at the UN
This latest report from the UN High Commissioner for Human Rights for discussion at the HRC’s current session focuses on best practices and lessons learned on how protecting and promoting human rights contribute to preventing and countering violent extremism. It builds on previous efforts at the HRC and elsewhere in the UN system. At its 30th session in October 2015, the HRC passed a resolution expressing deep concern at the profound threat posed to the realisation and enjoyment of human rights by acts resulting from violent extremism and by the increasing and serious human rights abuses and violations of international humanitarian law by violent extremist and terrorist groups. The resolution called for a panel discussion at the HRC’s 31st session in March 2016 to discuss the human rights dimensions of preventing and countering violent extremism. The resolution, sponsored by Albania, Bangladesh, Cameroon, Colombia, France, Iraq, Mali, Morocco, Peru, Turkey, Tunisia and the United States, was divisive and ultimately went to a vote.
In a parallel but complementary process, in January 2016 the UN Secretary-General launched the Plan of Action to Prevent Violent Extremism, which emphasises the need for a comprehensive approach to countering terrorism and violent extremism that goes beyond “law enforcement, military or security measures to address development, good governance, human rights and humanitarian concerns.”
Risks of efforts to counter violent extremism for internet policy
The incitement to violence fuelled by extremist ideology is of pressing global concern, yet the problems with the efforts to counter violent extremism are multifaceted. For example, while there are international human rights standards around freedom of opinion and expression, freedom of belief or religion, and hate speech, there is no common definition for “violent extremism”. As such, when interpreted broadly, efforts to prevent or counter it can be inconsistent with human rights norms, outlawing ideas or beliefs rather than violent actions. Freedom of thought is a non-derogable right and as the High Commissioner’s report points out, some laws seek to prevent “extremist views”, which is incompatible with the international human rights framework.
CVE efforts rely on profiling communities and individuals who are deemed to pose a risk, which can lead to stigmatising certain religious groups and cultures. This can actually have the opposite effect of alienating them and fuelling extremism. Similarly, as the report points out, because men and boys are seen as the prime recruits for violent extremism, and because women are seen as playing traditional roles in their families, CVE initiatives often allocate resources towards boys and men, which can further marginalise women and girls and reinforce gender stereotypes. Finally, there is a lack of studies or research confirming the impact of CVE policies, meaning that current initiatives do not seem to be evidence-based.
The internet is viewed as a platform for both recruitment for violent extremism and for countering it through counter-narratives. Measures to counter violent extremism online (filtering, blocking access to certain platforms, network shutdowns, removal of accounts) are often inconsistent with international human rights law and the principles of legality, necessity and proportionality with respect to freedom of expression. Measures like these are taken at a mass scale and fail to demonstrate how the perceived benefits outweigh the importance of the internet as a tool to maximise the diversity of voices in discussions. Or, as the High Commissioner’s report puts it, such measures are “at odds with the individualised assessment required under human rights law.”
In addition, CVE initiatives rely on surveillance in order to identify targets. Though “targeted” in the sense that they are looking for specific types of people, the vagueness of the definition of what constitutes violent extremism, and the broad way it is interpreted by some governments, means that mass surveillance is needed in order to identify targets. Besides being at odds with the individualised assessment required under human rights law, such measures are based on stereotypes and prejudices against specific groups. The implication of this is that machine learning and other techniques can be designed with built-in biases, introducing false positives which may impact large numbers of innocent people.
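The scale of the false-positive problem follows from simple base-rate arithmetic: when the behaviour being searched for is extremely rare, even a highly "accurate" detection system flags far more innocent people than actual targets. The sketch below illustrates this with purely hypothetical numbers (the population size, prevalence, and error rates are assumptions chosen for illustration, not figures from the report):

```python
# Base-rate sketch: why mass screening for a rare behaviour flags
# mostly innocent people. ALL numbers here are hypothetical.

population = 10_000_000      # people whose communications are scanned
true_targets = 100           # actual violent actors among them (rare, by assumption)
sensitivity = 0.99           # fraction of true targets the system correctly flags
false_positive_rate = 0.01   # fraction of innocent people wrongly flagged

flagged_true = true_targets * sensitivity
flagged_innocent = (population - true_targets) * false_positive_rate

# Precision: of everyone flagged, what fraction is actually a target?
precision = flagged_true / (flagged_true + flagged_innocent)

print(f"innocent people flagged: {flagged_innocent:,.0f}")   # ~100,000 people
print(f"chance a flagged person is a real target: {precision:.2%}")  # well under 1%
```

Under these illustrative assumptions, a system that is 99% accurate in both directions still flags roughly 100,000 innocent people, and fewer than one in a thousand flagged individuals is an actual target — the "individualised assessment required under human rights law" cannot be inferred from an algorithmic flag alone.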
The High Commissioner’s report emphasises that if they are to be effective and sustainable, these initiatives must comply with international human rights law and not be discriminatory. The report follows up on many of the concerns that 58 civil society organisations (including APC) raised in a joint submission to the HRC. There are still some points that could have come out stronger in the report.
First, the internet is largely examined in the context of its role in either facilitating or countering violent extremism (understandable given the scope of the report). However, some of the measures described in the report are inherently disproportionate (such as network shutdowns) and impact a wide range of rights, from freedom of expression and assembly to the rights to education and health. Such measures can never be justified, even in the name of countering violent extremism. More accountability is needed regarding the use of such approaches, which are not subject to enough scrutiny.
Second, while the report looks at underlying causes of violent extremism from a domestic perspective, citing economic, social and political policies as creating conditions of exclusion, poverty and disillusionment, it does not consider the impact of foreign policy on violent extremism. This is a huge factor and it is worth noting that technical solutions cannot respond to deep-rooted issues resulting from social, economic and political causes.
Third, the High Commissioner’s report could have examined the role of the private sector in more detail, in particular the extent to which companies are taking voluntary measures to counter violent extremism. There is a lack of transparency regarding companies’ cooperation with governments to take down content and share user data, which comes about both through government requests and through companies’ own terms of service. Previous experience with companies such as Facebook and YouTube does not indicate that the private sector has consistent guidelines regarding content removal, or the capacity, capabilities and human resources to apply them, and appropriate, fair appeal systems are lacking.
As the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression noted in his recent report, internet service providers, social networking platforms, search engines, cloud service providers and other companies are facing increasing government requests for user information based on local laws and regulations. State oversight of these requests varies, ranging from prior judicial authorisation to high-level executive approval to none at all. The Special Rapporteur notes that “there are also gaps in corporate disclosure of statistics concerning the volume, frequency and types of request for content removals and user data, whether because of State-imposed restrictions or internal policy decisions.”
Ranking Digital Rights notes that certain types of content are problematic and deserve to be addressed, but there are concerns about accountability, fairness and consistency. Through terms of service and other community standards-type documents, companies already disclose information about the circumstances in which they restrict content. Ranking Digital Rights recommends companies should report data about the volume of actions they take to enforce these rules with respect to different types of content. By reporting data related to terms of service enforcement, companies can demonstrate the extent to which they are addressing concerns related to particular forms of content.
Proactive programmes from companies, like Google/Jigsaw’s “Redirect Method”, to dissuade potential recruits from joining ISIS, also raise human rights concerns. The programme, as described by an article in Wired, “places advertising alongside results for any keywords and phrases that Jigsaw has determined people attracted to ISIS commonly search for. Those ads link to Arabic- and English-language YouTube channels that pull together preexisting videos Jigsaw believes can effectively undo ISIS’s brainwashing – clips like testimonials from former extremists, imams denouncing ISIS’s corruption of Islam, and surreptitiously filmed clips inside the group’s dysfunctional caliphate in Northern Syria and Iraq.” While this method does not differ dramatically from those used for targeted advertisements or political campaigns, it does differ in one significant way – rather than selling products, Google/Jigsaw is openly venturing into selling ideologies, with little transparency or oversight of its methods.
Relevance for internet governance in MENA
While the UN’s discussions are global in nature, this discussion is especially relevant for the MENA region, as people living in MENA are more likely to experience violent extremism than those in some other regions, and because many CVE initiatives target citizens in and from the region. There is a risk that the measures described in the report may be used by governments in the region without adequate oversight and/or safeguards, worsening the current situation of digital rights within MENA. CVE initiatives can legitimise existing practices and add further constraints through current or pending legislation, such as cybercrime laws. Such measures can also be used to target other groups, such as activists and political opponents, based on an overly broad interpretation of terrorism or hate speech. For example, in Tunisia, which stopped all internet filtering after the 2011 revolution, there is a push to reintroduce control and filtering on the grounds of confronting terror threats, as the country has faced several deadly attacks in the last two years and is one of the top countries exporting “jihadists” to join ISIS.
Governments should heed the recommendation of the High Commissioner, that:
Measures to prevent and counter violent extremism online should clearly set out the legal basis, criteria and guidance on when, how and to what extent online content is blocked, filtered or removed. States should also review their laws, policies and practices with regard to surveillance, interception, collection and retention of personal data in order to ensure full conformity with international human rights law. If there are any shortcomings, States should repeal, amend or promulgate such laws to ensure that there is a clear, precise, accessible, comprehensive and non-discriminatory legal framework. Information and communications technology companies should allow surveillance of individuals on their platforms only when ordered to do so following judicial intervention.
During the discussion of this report at the HRC, Morocco and the US announced the formation of a Group of Friends on Countering and Preventing Violent Extremism, which was officially launched on 7 September 2016, and of which they are co-chairs. The GoF intends to promote “substantive dialogue on the human rights dimensions of preventing and countering violent extremism in Geneva with the aim of sharing lessons learned and best practices, promoting international cooperation and collaboration, and developing and implementing approaches to prevent and counter violent extremism,” and “to serve as a platform for promoting this agenda in Geneva and work with other States, National Human Rights Institutions, experts, and civil society.” They see the need for “effective coordination and information sharing within the UN and between States, the relevant UN entities and the relevant international, regional, and sub-regional organizations and forums.” The Geneva Centre for Security Policy (GCSP) will advise and support the work of the GoF. In December 2016, the GCSP will co-host an important high-level PVE event in New York, and several more such events and comprehensive courses are being planned for 2017.
While the GoF is not leading any further action at this current HRC session, a Mexico-led resolution on the protection of human rights and fundamental freedoms while countering terrorism may also address aspects of “violent extremism conducive to terrorism”. Other than at the HRC, this issue is likely to continue to be high on the agenda of a variety of intergovernmental and internet governance bodies, such as the annual Internet Governance Forum in Guadalajara and the Security Council’s Counter-Terrorism Committee Executive Directorate.