
There are many conferences these days on different aspects of the digital society. They fill the diaries of digital insiders. Their proliferation’s easier to handle for those with the deepest pockets – bigger countries and global corporations. Other stakeholders have to pick and choose the ones that matter most to them.

UNESCO held a conference last week on what it called ‘Internet for Trust’ – more specifically to discuss draft guidelines for regulating digital platforms in order “to improve the reliability of information and protect freedom of expression and human rights.”

This was a conference that mattered. Here are some thoughts on why, on what transpired, and on the issues that need to be addressed in moving forward.

The central issue

The central issue for the digital society, increasingly, is the direction that it’s taking, how it shapes human society, its relation to the ‘common good’.

We’re less naïve about this now than 20 years ago. Digitalisation enables people to do things more effectively, whether they are saints or sinners, democrats or dictators, service providers or criminals, investigative journalists or propagandists. The technology and services that enabled (or facilitated) the Arab Spring also enable (or facilitate) racial hatred and gender-based violence today.

Opinion’s shifted as digitalisation’s grown, from focusing on optimism that’s based on benefits towards anxiety that’s based on risks. A key part of that, in terms of governance, has been growing acceptance of the principle of regulation.

Left to their own devices, runs this argument, powerful players – governments, corporations, others – will advance their own interests. This is as true in the digital world as in other economic sectors (energy production, chemicals, pharmaceuticals, for instance) and also in organisations that are criminal.

Setting boundaries around what powerful actors do is, therefore, important; the challenge concerns how to do this without adversely affecting rights and other aspects of the ‘common good’. Hence UNESCO’s role, as a UN agency concerned with setting standards that are rights-respecting, particularly (but not exclusively) where expression is concerned.

The power of platforms

The growing power of online platforms matters here.

They differ, obviously, in type: some are focused, for instance, on enabling commerce; others on exchanging content. For many people, it’s the latter that are more concerning.

  • The most powerful platforms, it is felt, have become primary influencers of opinion for many users, and they’re open to manipulation: by some users, including those who aim to subvert democratic or national institutions, promote racial hatred or misogyny, or violate the rights of others; and potentially, also, by their owners, who have their own political and economic axes that they like to grind.

  • The algorithms by which platforms recommend content that encourages users to stay engaged (so consuming advertising and increasing revenue) are widely felt to favour more extreme than moderate content because that’s what generates and maintains more engagement. In any case those algorithms are opaque, which makes them open to manipulation. We users aren’t determining the content that we see; that’s being determined (curated?) for us by programmes that show us what they calculate we'd like to see (or that their human/government/corporate managements wish us to see).

The power that platforms exercise doesn't just concern the information ecosystem. It’s also economic, and political. It doesn’t just affect freedom of expression but also other rights, including freedoms from abuse and harassment. And increasingly platforms are shaping what’s sometimes called the Overton window: the range of ideas or behaviours thought normal or acceptable within a given community or society.


Debates around regulating information and communications media are complex. In international gatherings about digitalisation, they often focus around rights, particularly freedom of expression – but rights are also complex.

The central principle concerning freedom of expression in the international rights regime is Article 19 of the International Covenant on Civil and Political Rights. This requires governments to protect freedom of expression but also sets out limits to that freedom.

Some of these are concerned with “national security, … public order, … public health or morals”, and limiting these has been a primary concern of human rights defenders. But they also permit limitations that are “necessary for respect of the rights … of others”. These lie at the root of debates around regulating online hate speech and harassment, including that intended to silence others’ freedom of expression through intimidation.

Governments are also obliged, by Article 20 of the Covenant, to prohibit “advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence”.

Expression’s complicated, too, by power structures and questions of plurality: who controls the media channels that are available (in this case, platforms); who has access to them and how equal is that access; how many different channels are available; in what languages, with what political or cultural diversity; and so on.

And regulation’s not just about rights, but also economics (the ways in which powerful players (in this case platforms) can use economic power to advance their own commercial interests) and politics (the ways in which media owners can leverage media power to promote political goals and allies).

UNESCO’s challenge

The complexities of regulating platforms have been bubbling up for years in digital discourse and are becoming more intense as platforms have become more powerful influences over human behaviour and more overt vectors for behaviour that is seen as harmful to the ‘common good’.

UNESCO’s attempt to establish guidelines that are built around established international agreements, including human rights agreements, is an important intervention. The aim’s to try and build some kind of order, some set of norms, across a media environment that’s changing rapidly.

Part of the concern here is historic. The digital environment’s developed without the same degree of regulatory governance that’s shaped telecommunications, broadcasting and print media, or that governs other economic sectors (including competition policy).

Some digital insiders cherish this because they see it as facilitating innovation. Increasingly, though, it’s seen by many as having also fostered problems, including the spread of mis/disinformation and surveillance by both governments and platforms. Regulation itself has also driven innovation in other sectors, not least broadcasting and telecoms, by facilitating common technical standards, for instance, enabling and maintaining competition and requiring inclusion and plurality.

The absence of standards in an economic sector doesn’t mean that standards don’t emerge: just that those standards that do so are shaped by the market (and the preferences of powerful actors), rather than by political decisions, regulatory norms or discourse around what is the ‘common good’.

In practice, advocates of guidelines such as those UNESCO’s now proposing argue, this means that the absence of standards can give more power to authoritarian governments and corporations to shape digital futures than would the establishment of more formal standards. Guidelines and regulation are, in their view, required to sustain international norms including human rights.

Urgency here is implied by two things: current concern about online abuses and disinformation’s potential impact on democratic institutions; and the imminence of major new developments in media, such as AI chatbots like ChatGPT which could radically reshape the digital environment (does text they generate, for instance, benefit from ‘human’ rights protections?).

The outcome of the conference

The conference was an important staging post within debates around these themes.

UNESCO was emphatic about the need for, and urgency of, building a rights-respecting information ecosystem. While its draft guidelines were seen as progress towards this by most participants, there was less consensus over detail.

Who should regulate, and who should regulate the regulator, was one key question (as it often is). Self-regulation – platforms deciding what should and shouldn’t be online – has been platforms’ own preferred approach, but is increasingly seen as unsuccessful. Many want to see platforms be much more accountable – to users and society in general. Many fear, however, the possibility that this means government control, particularly by authoritarian regimes.

Three themes emerged here, reminiscent (to old timers like myself) of discussions about regulating telecoms back in the 1980s:

  • the need for regulators to be independent from both governments and business interests;

  • the desirability of building regulation around principles that will survive changes in technology and services rather than depending on the technology and services that are currently available; and

  • the idea that regulation should focus on requiring platforms to implement its principles across the board, without regulators having to intervene directly (except where regulated bodies fail to do so).

There was, I felt, general (but not universal) acceptance of the principle of regulatory guidelines at UNESCO’s conference – a good deal more, for certain, than would have been the case five years ago.

While some supported the majority of the draft guidelines, and many supported some or most of them, many also disagreed with some and/or sought to extend consultation processes and take more views from more potential stakeholders. While many recognised the need for urgency, a few at least might be thought to have preferred prevarication.

And there was also a problem of participation. UNESCO guidelines will eventually require agreement by its (and therefore the UN’s) member states. Not every government was represented, and a number of powerful governments that would have a major role in implementing international agreement made little or no contribution. And only two of the digital world’s main platforms participated substantially in a major international discourse on their own accountability. That’s an obvious problem.

Six thoughts for establishing principles

So the debate around draft guidelines will continue. I’ve no space here to go into the issues in much detail, but will end by raising six themes I think ought to be crucial to discussion.

First, whatever guidelines are eventually agreed, they should have lasting resonance: they should be built around principles rather than current circumstances, so that they address the future rather than the present only. This is difficult because the future is uncertain and is changing rapidly (no one predicted that these platforms would become so powerful 20 years ago), but it’s essential if guidelines are not to need frequent revision.

Second, these guidelines must have broad international relevance. Consistency’s required: principles that apply to what are often global platforms need to be applied by them across their networks. But scope for variance is also needed. As many delegates observed, different countries have different cultural histories, social norms and economic circumstances. These can’t be ignored if regulatory principles are to be considered valid in different jurisdictions.

Third, principles require enforceability. Most platforms offer, and will continue to offer, their services across the globe, but will be based in one of a few specific countries whose governments are more than likely to support their global interests. The biggest will have greater power than many governments of countries where they offer services. How can regulators in small developing countries exert authority over powerful platforms that are based elsewhere, think those countries marginal to their commercial interests, and have no meaningful local management? And how can regulators in those countries build the skills they need to do so as effectively as possible?

Fourth, regulation should be understood as multifaceted. It’s not just concerned – as most discussion at this conference was – with issues of expression (including its abuse). Regulation’s also concerned with wider goals of media governance (such as plurality), with economic issues (abuse of market dominance, taxation), environmental factors, the requirements of national development, conflict avoidance and conflict mitigation, criminal law and legal processes, inclusion, equality and equity. All of these matter, and the way they’re handled needs to be consistent across different government departments and agencies.

Fifth, complex issues aren’t susceptible to binaries or purely technological solutions. Attention should be paid to impacts when balancing the rights of different parties (‘protecting the rights of others’) and the interactions between rights and rights agreements (not just the covenant on civil and political rights, for instance, but also agreements on economic, social and cultural rights and those concerned with gender and with children). When technologies or services enable both those who seek to use them for the ‘common good’ (e.g. human rights defenders) and those who seek to use them to its detriment (e.g., criminals), as most do, the answer’s not to choose one or the other but to find ways that deal with both. This requires focusing on impacts when designing principles.

And sixth, whatever is agreed here will require broad consensus. It requires the involvement of diverse stakeholders – which are much more complex in reality than the four or five stakeholder groups typically identified in international fora, and should be recognised as such. It also requires consensus across divergent geopolities – which will be very hard to find within today’s environment.

Developing international principles is always difficult. No one’s going to find the outcome from this process perfect. UNESCO’s conference focused attention and featured much more nuanced discussion than I’ve found on this theme in comparable fora. That’s an important step forward. Those concerned about digital futures should engage with it and watch for how it interacts with other discussions such as those around the UN’s intended Global Digital Compact.


Image: Day 1 _ Internet for Trust by UNESCO Headquarters Paris via Flickr (CC BY-NC-ND 2.0)

David Souter writes a fortnightly column for APC, looking at different aspects of the information society, development and rights. David’s pieces take a fresh look at many of the issues that concern APC and its members, with the aim of provoking discussion and debate. Issues covered include internet governance and sustainable development, human rights and the environment, policy, practice and the use of ICTs by individuals and communities. More about David Souter.