Intermediary liability: Internet policies that affect Africans

By MN, APCNews, Lagos

Many governments in Africa are establishing regulations to further control the flow of information on the internet. This trend includes holding intermediaries liable for content circulated by their users on their platforms and networks. APCNews talked to researcher Nicolo Zingales to find out more about the issue in the African context.

What aspects of intermediary liability do you think are relevant for internet rights activists?

Generally speaking, intermediary liability can affect anyone engaged in communication on the internet. This is because the mere existence of potential liability for intermediaries for “carrying the message” creates adverse incentives for free speech, which manifest in two ways. The first is through removal requests submitted to the intermediary after content has been uploaded: the intermediary’s awareness that it could be found liable for not complying with such requests tends to lead to semi-automatic obedience (or at least, to errors in favour of censorship rather than free expression). This normally operates through a take-down procedure, where the legal system provides for one.

The second is that the intermediary is incentivised from the outset to prevent the uploading of content that could expose it to liability, even where there is only a small possibility that the content would be found illegal. The intermediary can do this by adopting restrictive terms of service that do not allow for content that is “dangerous” from this perspective. This often includes “edgy” content, i.e. content not in line with the mainstream view, or with a broad and far-reaching understanding of the scope of an individual’s right to privacy, dignity or intellectual property. On this last point, it should be noted that criticism, parody and transformative use of someone else’s content are an integral part of freedom of expression ‒ as well as of copyright law, which is often invoked to justify free speech restrictions.
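To make these adverse incentives concrete, here is a minimal sketch in Python. All names, values and thresholds are hypothetical illustrations of my own, not drawn from any real platform or law; the point is simply that the higher the intermediary’s own exposure, the lower the bar a removal request has to clear.

```python
# Hypothetical model of a liability-averse takedown handler.
# All names and thresholds are illustrative, not from any real system.

from dataclasses import dataclass

@dataclass
class TakedownRequest:
    content_id: str
    claimed_ground: str    # e.g. "defamation", "copyright"
    apparent_merit: float  # 0.0 (baseless) .. 1.0 (clearly valid)

def decide(request: TakedownRequest, liability_risk: float) -> str:
    """Return 'remove' or 'keep'.

    A neutral adjudicator would remove only when the claim has merit
    (apparent_merit > 0.5). A liability-averse intermediary lowers the
    bar as its own exposure grows, so borderline speech gets removed.
    """
    removal_threshold = 0.5 * (1.0 - liability_risk)  # shrinks as risk grows
    return "remove" if request.apparent_merit > removal_threshold else "keep"

# The same weak claim produces opposite outcomes at different risk levels:
weak_claim = TakedownRequest("post-42", "defamation", apparent_merit=0.2)
print(decide(weak_claim, liability_risk=0.1))  # keep   (low exposure)
print(decide(weak_claim, liability_risk=0.9))  # remove (high exposure)
```

The same threshold logic applies ex ante: a platform screening uploads under high liability risk will reject content that a neutral judge would consider lawful.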

I think that the problem of “chilling effects” is particularly relevant for activists, who may argue for change in ways that are controversial ‒ both for governments and for certain classes of private individuals ‒ and who therefore rely heavily on being able to work at the edges of other people’s rights to restrict their speech. So a limited and reasoned regime of intermediary liability is very important in general, and even more important for activists, as an enabler of free speech.

An additional but related point concerns the responsibility of intermediaries for the use of their users’ personally identifying information. This is not perceived as an issue of intermediary liability in its traditional and narrow sense (i.e. indirect liability for content generated by users), but it can still be linked to the sphere of competence of intermediaries, and therefore to intermediary liability broadly understood. In particular, an intermediary that discloses personal information to governments or private parties in violation of the applicable legal rules would be directly liable. Obviously, the rules governing this kind of behaviour have a downstream effect on users’ free speech, as well as on their willingness to engage in certain kinds of transactions (including the use of internet services in a particular country altogether), which can be crucial for activists seeking to reach their audience.

Can you give me examples of internet intermediaries that people interact with on a daily basis, globally and in Africa?

In the background paper I noted that the role of intermediaries today is ubiquitous, because intermediation is intrinsic to global networked communication and occurs at several layers of the communication chain. More specifically, of the several categories of intermediaries mentioned in the paper, three are most frequently called on to make crucial judgements about user-generated content. The first is internet service providers (ISPs), in the narrow sense of internet connectivity providers. The second is social networks and search services, although most of the companies offering these services are based outside Africa (often in the US), so there is a problem of enforcement jurisdiction, as well as of rules and standards that may conflict with those these companies must comply with in their country of origin. The third is news portals, blogs and other websites that allow user posting.

In your article, you made reference to the term “safe harbour” in legislative frameworks. Can you please explain what the term means? Can you give us some examples for African countries? And why are they important?

Basically, “safe harbour” refers to an area of legal certainty that companies can rely on to be sheltered from potential liability. It is also called “immunity”, although it never goes as far as entitling a company to complete immunity: its benefits are conditional on fulfilling specific requirements.

So far, South Africa is the only African country I am aware of with a safe harbour provision. In this respect, we have to recognise that bringing the concept of safe harbours into legislation constitutes a good practice and a leading example in the region, one inspired by older and in some respects more advanced systems such as those of the US and the EU.

However, it also needs to be recognised that this safe harbour suffers from a problem: it imposes stringent conditions, requiring companies to belong to a government-approved industry representative body and to adopt and implement the corresponding code of conduct. The requirements for becoming an industry representative body are lengthy and burdensome, which is arguably why South Africa so far has only one, the Internet Service Providers’ Association (ISPA). At present, an intermediary that wants to benefit from the safe harbour in South Africa needs to register with this association, which can be problematic, or make little sense, for companies whose business model is completely different from that of ISPs. Furthermore, small intermediaries such as bloggers may not be able to afford ISPA’s annual fee, with the consequence that they either operate under the threat of indirect liability or are deterred from starting or continuing their activity altogether.

Companies need legal certainty, and also a protected space that enables the creation of platforms for free speech: platforms that allow people to express themselves freely, even if under certain limiting (and proportionate) conditions. Without such a space, judges and (most worryingly) governments can interpret the law in ways that deter platforms from engaging in that kind of speech-enabling activity.

What is the implication of liability for an internet intermediary, and how does it affect the user’s internet experience?

The existence of potential liability makes it likely that the intermediary will take a more cautious approach to deciding what kind of content is allowed on its platform. An example is Facebook’s removal of material perceived as offensive or indecent, even when the majority of its users would not perceive it as such (a classic example is photos of breastfeeding mothers). This is generally done to avoid the risk of indirect liability.

Can you give us examples of best and worst practices in Africa both in terms of legislation and de facto practices?

I mentioned the issue of safe harbour in South Africa; I think that’s a remarkably good practice because it’s very important to have a safe harbour in place. Most of the African countries we learned about in our internet intermediary workshop do not have anything similar.

So this is a very good practice, but there are drawbacks, such as the conditions that must be fulfilled to enjoy the safe harbour, and the fact that the user who originally posted the content that the intermediary has been asked to remove currently has no involvement at any point in the adjudication process.

A second good practice is the code of conduct developed by ISPA, which professes the need to respect fundamental rights such as privacy and freedom of expression ‒ though there is still room for improvement. Finally, we learned that ICT law- and policy-making in Kenya has significantly improved over the last few years in terms of awareness raising and consultation, and has produced norms with wide multi-stakeholder acceptance. Although this has not resulted in specific legislation on intermediary liability, it can still serve as a model for defining intermediary liability regimes in Africa.

On the negative side, many countries in Africa require intermediaries (including mobile operators and cybercafés) to register their users, usually by means of ID cards. We have seen that this has led to cases of non-compliance because the regulation is perceived as unfair. If regulators enforced such rules strictly, there would be few or no cybercafés.

Another problem we learned about during the workshop is that regulators in African countries often enjoy broad administrative power and can intervene swiftly through the administrative process, for example by changing licence conditions. This enables governments and regulators to order intermediaries to do essentially whatever they want (or at least to operate with insufficient limits and safeguards for constitutional rights), as appears to be the case in Nigeria and Uganda.

In general, I think a significant problem in Africa is a great deal of “cherry picking”: a policy that is good practice somewhere is lifted out of its context (which might be Europe or the US) and replicated in Africa without considering the legal and social context that surrounds it and all the complications that might arise. Policy making needs to be more methodologically rigorous and take these problems into account.

Why have the responses of internet intermediaries to technology-related forms of violence against women been so inadequate?

On this question I must say that I’m no expert; I cannot judge whether the responses have been totally inadequate, but I can speak to what we heard at the workshop. We had a presentation from someone working on women’s rights who complained about the lack of effective remedies for violence against women in the virtual world.

This raises two issues. One is that defamatory, insulting or otherwise heinous speech in the virtual world can be as harmful as physical violence in the real world. An insult in this context carries a multiplier effect, because of the extended number of people who can observe it. That is why intermediaries have a very important role in addressing these situations by identifying and removing such comments immediately.

Another issue is that there is often no prompt and effective remedy for this kind of situation. To be sure, there is usually ‒ if not always ‒ a way for users to notify the intermediary that such content exists, but the intermediary often fails to react quickly and responsibly. For this reason, we discussed at the workshop the role of a “responsible approach” by intermediaries, one that goes beyond what they are strictly obliged to do under the applicable law.

In this respect, I have also had direct experience of a slow reaction by an intermediary (Facebook) in addressing queries or complaints. I acknowledge that, particularly in the case of offensive speech, something must be done to ensure quick and effective resolution: the longer the offending speech remains online, the greater the potential harm to the victim. However, I also recognise that it would be difficult to require intermediaries to filter all content in order to identify this kind of speech ex ante and block it on their platforms. That would be tantamount to requiring general content monitoring, which runs against one of the basic principles on which immunity from intermediary liability is based around the world.

In this particular area, I think promptness is crucial. Perhaps a reasonable solution would be to establish a presumption of validity of the claim, or some other mechanism allowing affected parties to act immediately, without first giving the content uploader an opportunity to reply. But if this is allowed, there should be an appeal procedure giving the uploader a subsequent stage in which to argue their case, so that a court has the final word on the legality of the content at issue.
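As a rough illustration of that two-stage mechanism, the sketch below (again in Python, with hypothetical state names of my own; it does not describe any existing legal procedure) models the flow just described: immediate removal on a presumptively valid claim, a subsequent appeal stage for the uploader, and a court with the final word.

```python
# Hypothetical state machine for a "remove first, appeal later" remedy.
# States and transitions are illustrative only.

from enum import Enum, auto

class Status(Enum):
    ONLINE = auto()
    REMOVED_PENDING_APPEAL = auto()  # removed on presumption of validity
    UNDER_COURT_REVIEW = auto()
    RESTORED = auto()                # court found the content lawful
    REMOVED_FINAL = auto()           # court upheld the removal

def file_claim(status: Status) -> Status:
    # Claim presumed valid: content comes down at once, no reply stage yet.
    if status is Status.ONLINE:
        return Status.REMOVED_PENDING_APPEAL
    return status

def uploader_appeals(status: Status) -> Status:
    # The uploader gets a subsequent stage to argue the case.
    if status is Status.REMOVED_PENDING_APPEAL:
        return Status.UNDER_COURT_REVIEW
    return status

def court_decides(status: Status, lawful: bool) -> Status:
    # A court has the final word on legality.
    if status is Status.UNDER_COURT_REVIEW:
        return Status.RESTORED if lawful else Status.REMOVED_FINAL
    return status

s = file_claim(Status.ONLINE)         # -> REMOVED_PENDING_APPEAL
s = uploader_appeals(s)               # -> UNDER_COURT_REVIEW
print(court_decides(s, lawful=True))  # Status.RESTORED
```

The key design choice is that the presumption of validity governs only the interim state: the victim gets promptness, while legality is still settled downstream by a court.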


