Inside the Digital Society: Where’s the (common) good in it?

It’s several weeks since my last blog, on the fears and harms aroused by large language models and what’s been called ‘generative AI’. And haven’t things moved on in those few weeks!

Government approaches

The range of novel applications of AI has expanded rapidly, with many (in the AI industry and elsewhere) lauding their potential to improve productivity and make life easier, while others are more fearful for the future.

Governments have been trying to catch up with technological developments, balancing what they see as potential benefits against potential risks.

Some are more positive than others, especially those that see potential economic gain for their own countries’ digital businesses. Britain’s government, for instance, has promised a ‘pro-innovation approach’ intended to create an ‘AI superpower’ with ‘the right environment to harness the benefits … and remain at the forefront of technological developments.’

It recognises risks but foregrounds them less than the European Union, which ‘aims to address the risks generated by specific uses of AI through a set of complementary, proportionate and flexible rules.’

The innovators intervene

But there’s another set of interventions in the past few weeks that’s been generating news: from some of the scientists who’ve spearheaded the research that’s now being applied and looks set to be transformative. Their concerns are best summed up in this statement, signed by many, including key figures who have worked at Google, DeepMind and OpenAI:

“Mitigating the risk of extinction from AI,” they say, “should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

There’s a lot in that to unpack: too much for one blog. I’ll concentrate on four things: two broad ideas – the problem of uncertainty and the concept of the common good – and two that are more practical – the implications for regulation and the challenges of global governance.

The problem of uncertainty

The central problem here is that we don’t know where we’re going – and there are very different views of how concerned that ought to make us, and of how far ahead we need to look.

For decades science fiction has alarmed us with the thought that the cyber-somethings we create will become much smarter than ourselves and take control. Perhaps as (mostly) benign controllers (as in Iain Banks’ Culture novels); perhaps as malign ones, intent maybe on our destruction (try Battlestar Galactica).

That anxiety’s about what’s called ‘the singularity’ – ‘a hypothetical future where technology growth is out of control and irreversible’. An existential fear of our displacement. It’s been pooh-poohed as a possibility by AI scientists for years: something so far off that it should not concern us. The Statement on AI Risk above suggests, however, that some AI scientists now think the risk of our displacement’s closer: something we’re not wrong to fret about.

We’ve lived with other existential risks for some time now (nuclear war, pandemics, climate change). But a focus on those risks obscures another that’s more immediate. It’s the process of transition from a human-centred world to one that is more cyber-centred that should concern us now, for two reasons:

  • Firstly, because the growing competence of AI systems will be used by those with power over them to their own advantage rather than for the common good – for instance, to save money rather than to improve services, or to lock end-users into their products and services. (Some argue that, when AI scientists warn us of existential threats, they're obfuscating these more imminent concerns.)

  • And secondly, because the organisational and behavioural changes, and the rules and norms, that we establish to handle what is happening today (the transition) will tend to define those that follow (post-transition).

The common good

There’s increasing talk today of how new technologies should advance that uncertain thing, the common good.

What’s meant by that is defined differently by different people in different places and different social groups. But a stab was taken in our context by the World Summit on the Information Society two decades ago – ‘a people-centred, inclusive and development-oriented Information Society’ – and another, more generally and more recently, by the UN Secretary-General – ‘global solidarity’ with special attention, among other things, to ‘the triple crisis of climate disruption, biodiversity loss and pollution.’

Some sort of common understanding might be built here on existing international agreements – the human rights covenants, for instance, and the sustainable development goals; commitments to conflict management and poverty reduction, to gender equity, inclusion and diversity. In the digital context, there are very broad agreements (such as those in the Geneva Declaration of Principles) and some common goals concerning digital access, for instance, or cybersecurity.

These remain quite thin, however, especially where enforcement is concerned. Data privacy is widely praised in principle but in practice widely violated by those whose interests lie in doing so. The potential use of digital resources to benefit society is lauded in international documents, but with less attention paid to the limits of those benefits, the inequalities within them, and (until recently) the equally significant potential for harm.

The common good, above all, is not about digital development but about the development of society in general, which is impacted by digitalisation but should not (surely) be determined by it. Within the digital community, digitalisation often seems to be regarded as an end in itself, assumed to have benign (or at least net-positive) outcomes. For those concerned with health and education, equality and rights, economy and culture, however, digitalisation is a means rather than an end, and one that can be used to help achieve their goals or to undermine them.

There’s a related disharmony between digital advocacy and broader global goals. Much digital advocacy has emphasised the individual (personal empowerment and individual rights); the common good, by definition, is about collective outcomes (economic and social development and rights). These are compatible but different.

Regulation

And so, to regulation: to the rules and norms that may or may not guide the coming waves of digitalisation.

Innovation’s generally considered positive (though in practice it’s not always so, and certainly not so for all). It’s identified in many minds with ‘progress’, with the ‘evolution’ of technology, with improvements in the quality of life and the capacities of individuals and societies. And so has innovation often been – though downsides have generally come to light in time: we’re dealing now, for instance, with those arising from the industrial revolution (climate change) and the wonders worked for us by plastics.

Innovation can, of course, be regulated. That’s been the norm for decades in industries that are thought to be high risk – water and energy, for instance; nuclear power; pharmaceuticals – or to have major social impacts. Most industries, more recently, in many countries, have been required or encouraged to think about the impacts of the products and services from which they profit: impacts on the environment, for instance, or on the lives of children and other vulnerable groups.

This precautionary principle’s been opposed, to date, by most of those involved in digital innovation, who’ve argued that the absence of prior regulation – ‘permissionless innovation’ – enables good things to emerge (and market leadership to be established) that might otherwise have been prevented.

It’s an argument that, obviously, cuts both ways. Some of the most effective innovators have been criminal. Technologies that can transform for good can also transform for harm. What’s crucial is the relationship between machines and humans: the purposes for which people develop new technologies, the ways in which they aim to use them, and the ways in which they will be used (by those intended users and by others).

The Statement on AI Risk that’s recently been made by AI scientists, and other recent interventions, challenge the presumption that innovation should be unconstrained. ‘Mitigating risk’, as advocated there, requires prior analysis of what risks might pertain and implies constraints on innovation that are essentially precautionary.

Regulation in other sectors has often been considered beneficial, not just in mitigating risk or directing industry towards the public good, but also in focusing innovation and facilitating competition (as through standard-setting processes, for instance).

Three years ago in this blog I made the case for ‘responsible innovation’ in the context of environmental impact (and, by implication, impacts on human rights and gender equity) – ‘built round principles that include a care for consequences rather than giving free rein to whatever’s new.’ I’m glad to see the phrase ‘responsible innovation’ acquiring resonance in current debates around AI.

Governance

All of which raises questions about governance. Advances in AI, in the near and medium term, will present challenges of governance – both digital and general – that are generationally different from those we face today, and will do so much more quickly than new technologies have in former times.

Decisions made (or not made) in and for the near and medium term will have lasting consequences for what’s possible in the longer term, when those challenges will be significantly greater. And not making those decisions is itself decision-making.

Much of the discussion about generative AI today is about how (or whether: see above) it should be governed. There’s a shift underway from unconstrained innovation and market fundamentalism towards a more precautionary approach – in the public mood, in many governments and in parts of the digital community itself.

This is as much to do with fears of shorter-term impacts (on employment, for instance, on the quality of the information ecosystem, or on surveillance and the loss of data privacy) as with the long-term existential risks identified in the scientists’ Statement.

New things generally require new approaches to their governance. Old principles may still apply, but old modalities and institutions aren’t likely to enable the same desired outcomes when technological potential and social behaviour are greatly changed. A rule designed for muskets in the eighteenth-century US is proving far from useful in dealing with assault rifles in Chicago now.

The same’s true here. Autonomous vehicles can’t be regulated in the same way as those driven by chauffeurs, or cybercurrencies managed just like sterling or the euro. A new digital era requires new modes and institutions for its governance, in the same way that the internet required new modes and institutions from those that had served well in telecoms.

If we’re to maximise the gains and mitigate the risks arising from AI, we’re going to need modes and institutions designed for the future rather than the present (or in some cases past). Technological innovation requires institutional innovation. Strategic thinking’s needed about what that means, not conservatism.

Five recognitions

I’ll end by suggesting five recognitions that this requires.

First, recognition on the part of all stakeholders that the stakes are higher now: that the changes new AI will bring about are much greater than we’ve been anticipating; that the uncertainties and risks, as well as opportunities, are higher; and that this requires something more than ‘business as usual’.

Second, recognition that the current institutional framework’s insufficient. Governance of future digitalisation can’t be managed through existing mechanisms alone. This new technology reaches far beyond the internet, into the management of every other sector, into the relationship between government, business and the citizen, into the future governance of global trade and geopolitics, and into the relationships between developed and developing countries.

Third, recognition that this requires wide-ranging and, yes, innovative thought. The reshaping of public policy on sustainability began with the Brundtland Commission in the 1980s. It was its ability to think outside the box that enabled the 1992 Earth Summit to establish principles that have since enabled deeper (if still insufficient) global action on sustainability and climate change. We shouldn’t trap that thinking in existing frameworks but should experiment and welcome more diversity.

Fourth, recognition that this is not about technology but about its impact. If the common good – or development that’s ‘people-centred, inclusive and development-oriented’ – is the objective, then the lead should come from outcomes, not inputs: from what the ‘common good’ might represent rather than from what technology is capable of doing, from the economic, social and cultural interests of tomorrow rather than the vested interests of today. If the future is more digital, it will only work for people if its governance is determined through dialogue between digital and non-digital expertise, and through the understanding that digitalisation is merely the means and not the end.

And fifth, recognition that we’re not right now in the best place to address this. The UN has a process underway to agree a Global Digital Compact, but already the frames of reference for that are starting to look outdated (not the UN’s fault but a result of the pace of technological change). It’s easier at the best of times to reach agreement about rhetoric than about implementation (see the texts of many international agreements, and compare them with reality), but today’s geopolitics are particularly challenging in that regard.

The likelihood is that different models will emerge, and the biggest challenge will be how to make them work cohesively. But that will be more difficult if we don’t explore the options now.

 

Image: Machine Learning and Artificial Intelligence in Analytics by deepak pal via Flickr (CC BY 2.0)

David Souter writes a fortnightly column for APC, looking at different aspects of the information society, development and rights. David’s pieces take a fresh look at many of the issues that concern APC and its members, with the aim of provoking discussion and debate. Issues covered include internet governance and sustainable development, human rights and the environment, policy, practice and the use of ICTs by individuals and communities. More about David Souter.