By Cristiana González and Mohammad Tarakiyee. Publisher: APCNews.
This article was originally published by poliTICS, an Instituto Nupef publication, on 22 November 2015.
Given that the Internet has become an indispensable tool for realizing a range of human rights, combating inequality, and accelerating development and human progress, ensuring universal access to the Internet should be a priority for all States. – UN Special Rapporteur Frank La Rue, 2011.
We believe it’s possible to sustainably provide free access to basic internet services in a way that enables everyone with a phone to get on the internet and join the knowledge economy while also enabling the industry to continue growing profits and building out this infrastructure. – Facebook, 2013.
The idea that access to information and communication are key rights to enable economic growth and to support human development and the empowerment of marginalised and impoverished people is not a recent one. From the World Summit on the Information Society (WSIS, 2003-2005), and the subsequent discussions in the Internet Governance Forum (IGF), to the May 2011 report by the United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue, consensus has been built around the idea that access to the internet is an important means to advance human rights.
However, several obstacles limit the equal right to access around the globe. Rural and geographically isolated areas and large parts of developing countries still lack the telecommunications infrastructure needed to deliver internet access. Furthermore, economic barriers in developing countries, tied to the scarcity of infrastructure and bandwidth, make internet access unaffordable for many impoverished and marginalised people. This is compounded by the lack of public access infrastructure.
This is why, in 2015, four billion people, mostly from developing countries, remain disconnected. These inequalities have been used as justification by Mark Zuckerberg’s project Internet.org, which aims to “connect” two thirds of the world’s population by giving them access to a walled garden of “free” services, which according to him is the right thing to do.
What is Internet.org?
Launched in 2013, Internet.org is an initiative led by Facebook to explore new ways to connect the two thirds of the world that remains offline. The flagship project of the initiative routes traffic through a proxy server for a platform, also called Internet.org, that then ships information to lower-end phones in regions like Asia, Africa and Latin America. Thirteen countries have adopted the platform so far: Zambia, Tanzania, Kenya, Colombia, Ghana, India, the Philippines, Guatemala, Indonesia, Bangladesh, Malawi, Pakistan and Senegal.
Internet.org has attracted many other multinational partners, such as Samsung, Ericsson, MediaTek, Opera Software, Nokia and Qualcomm. It also features a Connectivity Lab that aims to explore new ways to connect people, including drones, satellites, and even lasers. Also interesting is the Innovation Lab, a partnership between Ericsson and Facebook that aims to help developers understand how to serve their applications to low-end devices and in regions where internet access is scarce.
The goal of this project is to bring affordable access to selected services in less developed countries by increasing efficiency and helping develop new business models around the provision of internet access. Mark Zuckerberg continues to maintain that access to part of the internet is better than nothing at all, particularly when billions of people on the planet remain offline. While that statement might seem agreeable at face value, Facebook’s approach to providing its platform in particular is at odds with fair access to the internet.
What influences internet adoption?
There are a number of interrelated factors that influence whether an individual will take up internet access. One major factor is affordability, which is very often the biggest obstacle to increasing internet adoption. Other factors include the perceived relevance of the internet to the potential user’s life, which can be broken down into the user’s literacy (particularly digital and media literacy), the availability of locally relevant content produced in locally spoken languages, and the availability of internet access points, whether public or private.
From a developing country perspective, many people will never enjoy private access to computers or the internet. Public access points such as telecentres, libraries, community centres, clinics and schools must be made available so that all people can have access within easy walking distance of where they live or work. This should be coupled with local, community and national initiatives to promote free or low-cost training opportunities, methodologies and materials related to using the internet for social development.
At this point, it is important to distinguish between barriers to internet adoption in developing countries, such as high cost of access that can prevent those who are aware of and want internet access from acquiring it, and users who are not willing to adopt internet access at any price. The lack of perceived relevance is posed as an explanation for why there is low internet adoption even in countries where there is generally high mobile growth, high penetration and use, and the availability of relatively inexpensive mobile data plans.
It is no coincidence that companies whose business models depend on widespread adoption and large networks of users (and their private data) are now deploying projects to not only offer cheap access to their networks in developing countries, but to also offer access to a limited set of services that may have more perceived relevance to non-adopting users in these countries.
Digital philanthropy or a new colonial data mine?
The Internet.org platform has been criticised for trying to dictate what people in those developing regions see, essentially creating a “two-tier internet”: an “internet of the rich” for those wealthy enough to pay for unlimited access, and an “internet of the poor” for those who cannot afford to make their own choices about what content they get to browse.
This, along with similar “zero-rating” initiatives that allow unlimited access to certain websites over an otherwise metered connection, violates one of the essential principles that have enabled the internet to become an important tool for communication: net neutrality. Although the rules for net neutrality vary from country to country, in general terms it is a network architecture principle holding that internet service providers should treat all data traffic on their networks equally, without discriminating or charging differentially by user, content, site, platform, application, type of attached equipment or mode of communication.
Zero-rating refers to a number of commercial strategies, developed by mobile service providers in partnership with application providers, aimed at providing free data traffic for a particular application or specific service; such price discrimination has also been viewed as a kind of net neutrality violation. More empirical research is needed to fully understand its impacts, but the growing literature in this field indicates that this business model affects users and their access to the internet, with some studies showing that in countries where zero-rated services were adopted, 3G and 4G prices have risen. The literature also documents concrete risks of government filtering of zero-rated apps in countries where censorship is common, and shows how zero-rating can increase social exclusion and decrease people’s interest in exploring other services and applications (walled gardens). Some research explores the economic effects of these initiatives on competition and their adverse consequences for technological development in developing countries.
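The mechanics of the price discrimination described above can be made concrete with a minimal sketch. This is not any carrier’s real billing logic: the hostnames, prices and currency are invented placeholders, and it only illustrates the structural point that under zero-rating the operator, not the user, decides which destinations are free to reach.

```python
# Toy contrast between zero-rated billing and a neutral tariff.
# All hostnames and prices below are hypothetical placeholders.

ZERO_RATED_HOSTS = {"freebasics.example", "partner-app.example"}
PRICE_PER_MB = 0.01  # hypothetical price per megabyte, local currency

def bill_session(host: str, megabytes: float) -> float:
    """Return the cost charged for a data session.

    Under a neutral tariff every megabyte costs the same regardless
    of destination. Under zero-rating, traffic to whitelisted hosts
    is free, so the carrier's whitelist shapes what the poorest
    users can afford to browse.
    """
    if host in ZERO_RATED_HOSTS:
        return 0.0  # subsidised walled-garden traffic
    return megabytes * PRICE_PER_MB  # the open web stays metered

open_web = bill_session("news.example", 50)            # costs 0.50
walled_garden = bill_session("freebasics.example", 50)  # costs 0.00
```

The asymmetry is the whole point: for a user whose budget is effectively zero, everything outside `ZERO_RATED_HOSTS` might as well not exist.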
In this context, approaches that might seem like philanthropic initiatives in the short run have the potential to destroy the fabric of a rich, pluralistic internet and to discourage public policies and regulation that could effectively increase internet penetration rates.
A simple look at the Internet.org website shows a severe case of the white man’s burden, an attempt to justify cultural and economic imperialism as a noble act, by appropriating the stories of local inventors, entrepreneurs and farmers, or whatever Internet.org sees as positive examples in developing countries, and using these to justify the need for a walled garden approach to increasing access to their platform.
In reality, the personal data of these willing and unwilling adopters of Internet.org will be mined and used by Facebook and the other closed platforms offered by Internet.org for their own profit. On the open internet, people have a choice to use platforms such as Facebook or other more privacy-oriented platforms, but this choice is one that the users of Internet.org will not be able to afford.
Internet.org as a platform takes agency away from the inventors, entrepreneurs and farmers of developing countries by trying to match them with the content that Internet.org thinks is useful to them. According to them, a farmer needs weather data. On the open internet, the farmer chooses whether to access weather data, or leaked files on his government’s agricultural treaties, for example. Furthermore, the farmer is not obliged to choose a weather service that will mine his data and use it to direct ads at him.
Another problem with associating and restricting access to a service like Facebook is that it has been shown to be a manipulative platform. It is not the only one, but the fact that it is the leading social network among users and marketers, and that it is migrating to a content provider model, suggests serious consequences for human rights, especially freedom of expression. Statistics on leading social networks worldwide from March 2015, ranked by number of active accounts, show Facebook as the market leader, surpassing one billion registered accounts, almost double the accounts of the Chinese platform QQ, which occupied second place. And the blue social network is not only popular among end users: in a 2014 survey, 54% of marketers worldwide named it the most important marketing platform. But instead of interpreting these numbers as proof of success, they must be analysed from a broader perspective that includes the debate over whether people are being trapped in a biased and discriminatory algorithmic bubble.
Facebook is not only the favourite of marketers. It has sent unprecedented levels of traffic to publishers across the internet in recent months, a dramatic and unexpected increase affecting a large range of sites serving a wide variety of content. Traffic from Facebook referrals to partner network sites was up 69% from August to October 2014, after the social network company broadly shifted its algorithms to create formidable new traffic streams. Now organic stories that people did not scroll down far enough to see can reappear near the top of their news feed if they are still getting lots of likes and comments from others. And since most of the company’s profit comes from advertisements inside its platform, it clearly prefers to keep individuals on its social network for as long as possible.
Shortly after this algorithm change, Facebook held talks with at least half a dozen media companies about hosting their content inside Facebook rather than making users tap a link to go to an external site. Posting journalism directly to Facebook sounded like a great idea to the publishers who adopted it early: they enjoy a set of small privileges that express themselves in major ways. Their stories load faster than links to outside sites, and their posts merge more seamlessly into the addictive news feed, so engagement, views, sharing and time spent all increase. Borrowing people’s time and making them spend it inside the platform is at the centre of Facebook’s strategy, reminiscent of 1990s portals like AltaVista, when companies tried to keep users on their own pages to capture more traffic.
Due to regulatory failures in many parts of the world, especially regarding the protection of personal data, companies like Facebook have grown on the basis of a very peculiar business model: behavioural advertising. The basic code at the heart of the internet is now fairly simple. The new generation of internet filters looks at the things a user seems to like (the things they have actually done, or the things liked by people similar to them) and tries to extrapolate. These filters are prediction engines, constantly creating and refining a theory of who people are and what they will do and want next. Together, these engines create a unique universe of information for each user, what Eli Pariser called a “filter bubble”, which fundamentally alters the way people encounter ideas and information.
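The feedback loop behind such prediction engines can be sketched in a few lines. This is a deliberately crude caricature, not Facebook’s actual ranking system, and all the items and topics are invented: it scores new stories purely by overlap with topics the user engaged with before, so the feed narrows toward past behaviour, which is the filter-bubble mechanism in miniature.

```python
# A toy "prediction engine": rank candidate stories by how much
# they resemble what the user already liked. All items and topics
# are invented for illustration.
from collections import Counter

def build_profile(liked_items):
    """Count how often each topic appears in the user's past likes."""
    profile = Counter()
    for item in liked_items:
        profile.update(item["topics"])
    return profile

def rank_feed(candidates, profile):
    """Order stories by predicted affinity with the profile.

    Stories about unfamiliar topics score zero and sink to the
    bottom, so the user keeps seeing more of the same.
    """
    def score(item):
        return sum(profile[t] for t in item["topics"])
    return sorted(candidates, key=score, reverse=True)

past_likes = [{"topics": ["sports"]}, {"topics": ["sports", "celebrity"]}]
candidates = [
    {"title": "Protest coverage", "topics": ["politics"]},
    {"title": "Ice bucket challenge", "topics": ["celebrity", "sports"]},
]
feed = rank_feed(candidates, build_profile(past_likes))
# The engagement-friendly story outranks the unfamiliar news story.
```

Notice that nothing in the loop ever promotes the "politics" story: with no past engagement on that topic, its score stays at zero no matter how important it is, which is exactly the Ferguson dynamic described below.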
Some concrete examples can contextualise the risks that such a vicious cycle can bring for the most basic rights such as freedom of expression and access to information.
In August 2014, in Ferguson (Missouri, USA), police officers and later the National Guard tried to enforce order on a town demanding justice for Michael Brown, the 18-year-old man gunned down by a police officer. In the middle of street protests, with many people commenting online what was happening, Facebook timelines were populated with the so-called ice bucket challenge, the act of dumping a bucket of ice water on someone’s head, including celebrities, to promote awareness of the disease amyotrophic lateral sclerosis (ALS, also known as Lou Gehrig’s Disease). While there had been far more stories published about Ferguson over the weeks, these stories were far less popular on Facebook than ice bucket content.
The implications of this disconnect from reality were huge for readers and publishers, considering Facebook’s recent emergence as a major traffic referrer. Because Facebook uses a ranking algorithm that filters what people see in their news feeds, relying too heavily on its algorithmic content streams resulted in de facto censorship. The Facebook principle that “a squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa”, applied to the Ferguson episode, is the kind of manipulation that people can at least notice. What must be a source of concern are the small manipulations that people may not be aware of.
The company itself has tested the idea of filters and bubbles. In 2014 there was tremendous controversy over Facebook’s manipulation of the news feed for research. The experiment manipulated the extent to which 689,003 people were exposed to emotional expressions in their news feeds, testing whether exposure to emotional content led people to post content consistent with that exposure, a form of emotional contagion. Consumer and privacy experts said the experiment set a dangerous precedent for how far corporations can go in order to compel certain consumer behaviours.
At the beginning of 2015, in an attempt to answer the question of whether the social network’s news feed selectively serves up ideologically charged news while filtering out content from opposing political camps, Facebook again conducted its own study and, unsurprisingly, its in-house social scientists found that polarisation does indeed happen. According to the research, liberals and conservatives in the United States may rarely learn about issues that concern the other side simply because those issues never make it into their news feeds. Although its methodology and references were widely criticised, the obvious conclusion of what was dubbed the “not our fault” study was that, over time, this could deepen political polarisation, because people are not exposed to topics and ideas from the opposing camp.
The case for a universal data allowance
Instead of leaving internet access to marketplace players, there are many alternative ways to expand infrastructure, such as policies for infrastructure sharing and the use of TV white spaces, combined with the deployment of national broadband plans and the development of community-owned networks. In addition to all these alternatives, an interesting idea emerged when critics of Internet.org argued that offering time-limited access to the entire web would be a more progressive strategy.
What if it were possible to implement a redistributive and innovative policy for the information economy? What if we could provide an unconditional universal data allowance (UDA) to all people, at a level sufficient for browsing and using different kinds of applications on mobile phones? The precise suggestion is that the government should pay for a fixed monthly amount of data, the same for everyone, for each person. This free monthly data would not be conditional on any behaviour or characteristic of the recipient, other than being a member of society.
By providing access without zero-rated services, this would be a concrete step towards social justice: it would increase freedom, including freedom of choice (improving women’s lives, for example), without giving too much power to private companies and their platforms. The universal data allowance is only meant as a baseline of access to which more data can be added. Unlike Internet.org, it is therefore not definitionally tied to some notion of “basic needs” on the internet, but instead aims to provide true universal access, giving agency to and empowering the people who use it.
Obviously, a very basic UDA would be feasible in economic, even budgetary, terms, especially because it would at least partially replace the data that operators already subsidise under current zero-rating schemes. In technical terms, there is no difference between the data that telecommunication companies now budget for “zero-rated” services and other data running on the network; the major limiting factor is bandwidth. A UDA would basically remove the arbitrary restrictions that limit zero-rating to certain privileged platforms.
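The budgetary claim can be made tangible with back-of-the-envelope arithmetic. Every figure below is a hypothetical placeholder rather than a real tariff or population count; the point is only that the programme cost scales linearly with population, allowance size and the wholesale price of data.

```python
# Back-of-the-envelope cost of a universal data allowance (UDA).
# All inputs are illustrative placeholders, not real market figures.

def uda_monthly_cost(population: int,
                     allowance_mb: float,
                     wholesale_price_per_mb: float) -> float:
    """Total monthly subsidy = people x data per person x unit price."""
    return population * allowance_mb * wholesale_price_per_mb

# e.g. 10 million people, 100 MB each, at a wholesale 0.001 per MB
cost = uda_monthly_cost(10_000_000, 100, 0.001)  # 1,000,000 per month
```

Because the relationship is linear, halving the wholesale price (for instance, through the infrastructure-sharing policies mentioned above) halves the programme cost, which is why the feasibility argument turns on bandwidth and procurement rather than on any technical barrier.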
But the idea of providing people access as part of a social policy at least raises a deeper debate on fundamental questions about the goals of social and economic arrangements and how social policy can help create the kind of society we want to live in — about how to correct for poverty amidst plenty, and how to ensure that everyone gets a fair share of the benefits of social cooperation. Such a policy would work towards eliminating the economic gap to universal internet access, as well as the artificial scarcity used to make zero-rated platforms appear socially useful.
This is about more than getting individual users connected, or reducing the idea of access to something that attracts potential consumers for the information economy market. It is about promoting real freedom for all by providing the material resources that people need to pursue their aims in the digital realm. At the same time, it would help to resolve the policy dilemmas of access and serve the ideals of open and free internet advocates and other social movements, since it is tied to the notion of increasing economic growth and development.