Facebook and the monetisation of hate: The way forward for holding platforms accountable

Image: "Smartphone" by Arjan used under CC BY-NC 2.0 license (https://flic.kr/p/izbmsy) Image: "Smartphone" by Arjan used under CC BY-NC 2.0 license (https://flic.kr/p/izbmsy)
Author: APC
Publisher: APC

The recent revelations by a Facebook whistleblower about the company’s decision to prioritise profit over user safety and human rights confirm concerns and trends that APC and other civil society organisations have observed and raised over the years regarding Facebook’s operations and its implementation of its community standards.

Facebook’s decisions to knowingly allow hate speech and misinformation to remain on its platform in order to increase engagement, to forgo safety standards across the board and to allow certain users to violate community standards have extremely negative impacts on the human rights of its users and on democratic processes worldwide. It is imperative to adopt regulatory and other measures targeting Big Tech companies worldwide, including Facebook, aimed at ensuring that the internet contributes to strengthening the exercise of human rights and democracy.

Background

In September 2021, in complaints made to the US Securities and Exchange Commission against Facebook, whistleblower Frances Haugen stated that, among other things:

  • Facebook knowingly incentivises angry, polarising content, including hate speech, violent speech and misinformation, since it leads to more engagement.

  • It does not offer the same safety systems across all the languages in which the platform is used or all the countries where it is available, because doing so is not cost effective. Further, safeguards built against hate speech and misinformation in the US were discontinued after the 2020 election because they were an impediment to increasing engagement.

  • Facebook privileges high-profile accounts, allowing them to operate with impunity, and its subsidiary Instagram knowingly platforms harmful material that endangers the mental health of young people.

Most of these issues have long been raised by researchers, users and civil society organisations.[1] The revelations, which draw on Facebook’s own internal research, make even more explicit that Facebook’s management is aware of the consequences of its policies and has consciously decided to ignore the risks of serious human rights violations they may entail in favour of profit.

Unchecked hate speech and misinformation

Facebook has a history of allowing hate speech and misinformation to run rampant on its platforms. In 2018, the United Nations Fact-Finding Mission on Myanmar pointed to Facebook’s role in the genocide of Rohingya Muslims, with Facebook admitting it was used to incite violence in the country. In India, APC’s research shows that Facebook was a vital platform for the spread of hate speech, misinformation and calls for violence against Muslims during the COVID-19 pandemic in 2020. That Facebook was aware of anti-Muslim content on its platform comes as no surprise: a Wall Street Journal investigation in August 2020 showed that Facebook had knowingly allowed hateful content from members of India’s ruling party to stay on its platform, and there have been several calls from civil society, including APC, for Facebook to conduct an audit of hate speech on its platform and take immediate action against such content. Yet little has been done so far. Facebook’s lack of action is also evident in the Palestinian context: a report from 7amleh found that 85% of its respondents were subjected to hate speech on Facebook with no action taken, while during the same period Facebook consistently cooperated with Israeli state authorities to censor Palestinian voices on the platform.

Apart from incentivising hateful content, the revelations show that Facebook does not offer safety standards or classifiers to weed out harmful content across all the languages in which it operates. APC and others have previously pointed out to Facebook the additional harm faced by minority language speakers due to cultural and language barriers in its operations and policies, especially in countries like Myanmar and India where much of the hate speech and misinformation circulates in local languages.

Whitelisting accounts

The documents also show that Facebook privileges or “whitelists” certain accounts belonging to celebrities, politicians and journalists, which are covered by its “XCheck” programme and allowed to post content that violates Facebook’s community standards. This troubling policy is in line with reports from last year on Facebook India’s refusal to apply its community standards on hate speech to political actors in the country. In its open letter to Facebook in response to those reports, APC highlighted the importance of Facebook remedying any bias in its policies and operations to ensure neutrality. Allowing powerful actors to violate community standards with impunity severely impacts human rights, including the freedom of expression of survivors and targeted groups who are silenced and the right to privacy of users, and puts the lives of people, especially those already marginalised, in physical danger. Although the Facebook Oversight Board has now said that it will review the XCheck system, it is unclear whether any change will result from this.

Failure to tackle gender-based abuse and violence online

Facebook’s choice not to act to mitigate harm, despite knowing that its platform Instagram was negatively impacting the mental health of teenage girls, also reflects its overall attitude towards abusive and harmful material aimed at women and gender-diverse people on its platforms. As APC’s findings have shown, Facebook and Instagram have not met their human rights responsibilities in the context of increasing violence and abuse against women and people of diverse genders and sexualities. If anything, the onus of safety and protection remains on survivors themselves, shielding platforms like Facebook from accountability for their impact on human rights.

It is clear that Facebook foresees no structural change to its algorithms and policies that would put tackling the normalisation, weaponisation and amplification of abusive and harmful behaviour ahead of “retaining” users’ attention and time, in a business model that profits from immediacy, emotional impact and virality.[2] The lack of transparency concerning algorithms and other technical and decision-making processes compounds the opacity in which Facebook operates, rendering the platform and its applications virtually closed to in-depth research and public scrutiny. The whistleblower’s findings are further proof of what civil society and others have been saying in recent years: self-regulation by Facebook and other social media companies has proven neither sufficient nor effective.

Centralisation of internet services

In addition to the revelations, the global outages of Facebook and its subsidiaries Instagram and WhatsApp on 4 and 8 October 2021, and Facebook’s inability to fix the issue for many hours, highlight the dangers of centralising infrastructure in big companies, the concentration of power in the ownership of multiple popular communications services, and the over-reliance on these services by millions of people.

Facebook and its subsidiaries have 2.8 billion users worldwide who rely on their services for purposes ranging from running businesses and community outreach to social engagement and personal communication. The six-hour outage on 4 October led to a $50 billion loss in market value for Facebook, and the losses to businesses and individuals relying on Facebook’s services are yet to be fully understood. An important factor is how embedded Facebook and WhatsApp in particular have become in the everyday lives of many people. Especially in the global South, free services like Facebook and WhatsApp are integral to small businesses and critical for daily communications between families and friends, even more so in the context of the COVID-19 pandemic.

This concentration of ownership of multiple services and platforms, as well as of market share, is part of the data-harvesting business model of Big Tech companies, which have a history of buying out potential competitors (as Facebook did with WhatsApp and Instagram) and growing their user base as a result. Despite serious antitrust questions being raised, this strategy of Facebook’s continues unabated. Monopolisation has allowed these companies to abuse their positions of power in many ways: not only by dictating how businesses, organisations and individuals can engage with them and on their platforms, but also by giving them a level of offline influence that can be difficult to counterbalance – for example, when countries seek to regulate private sector practices.

The drive to concentrate ownership and buy out competitors also harms innovation and the creation of viable alternative models of service provision and networking spaces, diminishing pluralism and diversity and leaving internet users dependent on a limited set of services. Facebook’s push to monopolise the market has also been enabled by governments allowing its Free Basics programme to operate in their countries, giving users access to the platform without any data charges, in violation of net neutrality principles. Responses by different stakeholders to decentralise critical services on the internet cannot wait any longer.

Way forward

The first step is to ensure that digital services and companies recognise and adhere to their responsibility to respect the human rights of their users and others impacted by their operations, in line with the UN Guiding Principles on Business and Human Rights. Platforms, especially large ones such as Facebook, ought to prevent the prospective harm their operations could cause and are obliged to remedy harms already caused. It has become paramount to establish alternative, smart, complementary regulatory mechanisms aimed at ensuring a rights-respecting, people-centred digital ecosystem. So far, however, governments have tried to tackle the problems posed by social media companies through increasingly authoritarian regulations that are overly broad, vague and intended to control content. These regulations, like those in Pakistan, India and Indonesia, as well as in other countries in the global South such as Mexico, not only restrict freedom of expression and a free, open and rights-respecting internet, but also result in more opacity from companies about their functioning and implementation of policies.

Instead, as APC has posited, regulations must ensure public oversight and accountability from companies through measures that focus on their behaviour, policies and algorithms rather than on content. This includes regulating the architectural elements of content distribution, such as the scale, extent and quantity of dissemination. It also includes assessing, through human rights audits, the effectiveness of measures undertaken to restrict or amplify content against statutory objectives grounded in the protection of individual rights, and mandating full transparency from platforms on the granular information necessary to monitor them effectively. It further includes the imposition of fines or other corrective actions when platforms do not provide that information in a timely manner. The approach must also go beyond tool-focused solutions towards addressing root causes, prevention, protection, liberation, well-being and respect for fundamental rights and freedoms, with the onus on tech companies to act with enhanced public transparency and accountability.

With this in mind, it is also imperative to acknowledge that the economic power Facebook and other Big Tech companies derive from their concentration of ownership has been a significant impediment to effective regulation, as well as a barrier to innovation and diversity. It is therefore time for measures to restrict the concentration of digital service provision and networking spaces in the hands of a few companies, which has led to internet centralisation. This can be done, for instance, by developing competition and antitrust regulations that make it harder for technology companies to buy out competitors, and that compel companies to divest parts of their business where these create monopolies. The US House Judiciary Committee’s recent approval of a report that lays out necessary antitrust measures against Big Tech is a welcome step in this direction. Steps must also be taken to ensure that there are no distortions of competition in digital markets that favour dominant companies by allowing discrimination in the traffic of applications, content and services based on commercial criteria. To this end, telecommunications regulatory bodies should issue clear guidelines for internet service providers and operators to protect net neutrality and technological innovation.

Beyond this, states, civil society, technical communities and others must engage in developing a multistakeholder approach to creating an enabling environment that allows diverse economic models and alternatives to the current social media and service provision platforms. This includes alternative community-driven models for technology development, at the level of both infrastructure and software, which can co-exist and thrive, contributing to the protection of human rights, limiting the erosion of democratic processes and decentralising the internet.

The dangers of inaction are best summed up by Nobel laureate Maria Ressa: “It is a battle for facts, and the biggest problem we face right now is that the world’s largest distributor of news, technology like Facebook and YouTube, have prioritised the spread of lies laced with anger and hate over facts. If you don’t have facts, you don’t have truth. If you don’t have truth, you can’t have trust. Without any of these three, democracy as we know it is dead.” We ignore this warning at our own peril.

Notes

[1] See, for example:

Suzor, N. (2019). Lawless: The Secret Rules That Govern Our Digital Lives. https://www.researchgate.net/publication/333950710_Lawless_The_Secret_Rules_That_Govern_Our_Digital_Lives.

Benesch, S. (2020). But Facebook’s Not a Country: How to Interpret Human Rights Law for Social Media Companies. Yale Law School. https://digitalcommons.law.yale.edu/cgi/viewcontent.cgi?article=1004&context=jregonline.

Barrett, P., Hendrix, J., & Sims, G. (2021). How tech platforms fuel U.S. political polarization and what government can do about it. Brookings. https://www.brookings.edu/blog/techtank/2021/09/27/how-tech-platforms-fuel-u-s-political-polarization-and-what-government-can-do-about-it/.

Velazco, C. (2018). Facebook can’t move fast to fix the things it broke. Engadget. https://www.engadget.com/2018-04-12-facebook-has-no-quick-solutions.html.

[2] PEN America. (2021). No Excuse for Abuse. https://pen.org/report/no-excuse-for-abuse/

 
