
Let’s start this week with a touch of science-speculation, on the verge of science-fiction. And then move on to questions that it raises for the here and now. Stay with me, please, for these.

Where are the aliens? What are the aliens?

My starting point’s a talk last week from Martin Rees, distinguished professor of cosmology and astrophysics, in which he looked at what is called the Fermi Paradox, or (to put it bluntly) “why ain’t we seen the aliens?”

Or, adding a bit more substance: There are billions of planets in our galaxy, and billions of galaxies. It seems unlikely that life’s only appeared on earth. Or intelligent life, for that matter. Why aren’t there signs of it elsewhere – for instance in the radio signals that we detect from space?

And there are common answers. Some suggest that earth life is unique, god-made or merest chance. Others point to the billions of miles 'twixt planets and the billions of years in which life or intelligence on any one of them could, first, flourish and then become extinct.

We live in a tiny window in both space and time. And we’re not that good as yet at working out what’s what in radio signals from big distances.

I’ve seen Rees’s take on this before. Its essence is that intelligent organic life – like us – is likely to be (anywhere) a mere interlude between early life – algae, plants – and even more intelligent but inorganic ‘life’, the product of technologies created by intelligence (first ours, then theirs).

In this context, as he puts it, “humans are an important transitional species.” The future of life, he suggests, if it has one, won’t be organic or biological; it won’t be planet-based because it won’t depend on what is found on planets (such as atmospheres); and, should we encounter it, we humans won’t be able to fathom its intentions.

So is this sci-fi?

Thinking about this naturally brings to mind the growing sci-fi literature (and some alarmist popular science) about the implications of our current rush towards AI. Literature that asks the question: will our descendants rule themselves or be governed by machines?

It’s a popular theme in movies, where most often it's dystopian. Think for instance of the Terminator and The Matrix franchises. In Iain M. Banks’ ‘Culture’ novels, by contrast, organic and inorganic intelligences live alongside one another in apparent harmony (though it’s the latter’s superior intelligence that clearly makes things tick).

One weakness of these sci-fis is that they find it hard to imagine motivation of a kind that isn’t ours. Their future machines tend to think and act like us, competitive products of evolution that we are, and just prove better at it. Which Rees thinks unlikely: why should their motivations be like ours?

The ‘singularity’

The key moment that’s envisaged here is often called the singularity, ‘a point in time when technological growth becomes uncontrollable and irreversible,’ as Wikipedia puts it.

Experts in AI today, such as Nigel Shadbolt, emphasise that we’re a long, long way from such a thing – and from the overarching ‘general’ artificial intelligence that would enable it (as opposed to AI for specific purposes decided by us in the here and now).

Nonetheless the concept’s troubled scientists like Stephen Hawking and pioneers of IT like Bill Gates. Rees’s argument suggests that, however distant that moment may be, if we don’t first destroy ourselves through our technology (through weaponry or climate change), the trajectory we’re on means that we’ll get there sometime.

Why does this matter now?

I’m not concerned here with speculating on the distant future of humanity, but with what the logic underlying Rees’s view implies for now. Which is about the pace of change and how we handle it.

Two things he’s saying seem to me important for the moment, and for decisions we take now that are concerned with the short-term future, rather than remote horizons.

The pace of change

The first of these is that the pace of technological development outstrips the pace of, first, humans’ bio-evolution and, second, our social and intellectual development. What technology can do for us, and what it can do to us, are going to change more rapidly than we can easily adapt to or even understand.

That pace of change is going to give great power, in the short and medium term, to those that are in a position to harness and exploit technology. That power will be used in different ways, with lasting consequences.

Much is made today of the benign potential of AI, to which, it’s clear, there is great substance. But equally, those that have vested interests in exploiting technology for personal, corporate or political advantage will pursue them – quite possibly in ways that privilege technology over rights or the public interest. ‘The price of liberty’ therefore includes, from now on, ‘eternal vigilance’ over technology.

Explicability

And that will be more difficult because few of us will be equipped to understand what that technology is doing.

We may be teaching kids to code these days, but even AI scientists no longer know why their algorithms are reaching the conclusions that they do. As technology grows more advanced, and machines crunch ever greater volumes of data, that will become ever more the case.

Transparency and accountability are important aspects of democracy and public engagement. Decisions that are made by algorithms can’t be transparent or accountable if the algorithms themselves are not transparent, if the algorithms’ decisions aren’t comprehensible to those affected, and (even more so) if they’re not even comprehensible to those who run them and implement their outcomes.

A society that is governed more automatically will need to find ways of maintaining public confidence in the integrity of automation – and making sure that automation’s outcomes are consistent with human desires, human prosperity and human rights.

This affects much of public policy. Two aspects I’ll pick up.

Innovation

There’s a problem here, it seems to me, arising from our attitudes to innovation. Policymakers often treat this as an inherent good. It’s become a buzzword seeking automatic nods of approbation.

But there’s a big difference between saying that “innovation’s necessary to address the problems that we face” and believing that innovation’s always positive – or that we shouldn’t worry over whether it will prove so because it likely will and we can fix things when it won’t.

The former point is valid, and we see it widely, though it should always be remembered that innovation on its own is not enough to tackle social problems. (We’ve seen enough technocratic failures to know this, surely.)

The latter point is not. There’s innovation failure, just like market failure. Some innovation’s deadly, including innovation that’s intended to be quite the opposite. It was Alfred Nobel’s concern that he’d be remembered for innovations that had turned out deadly which led him to finance prizes for those that might prove better.

More innovation’s driven by the desire for profit than by social gain or curiosity. That’s legitimate, but profitability and social gain can’t be equated (as innovations that exploit personal data at the expense of privacy have shown). Our view of innovation ought to be more nuanced: more concerned with motivation, less blinded by optimism or hype, and more attentive to outcomes.

Transition

The second issue I’ll pick up concerns transition.

Policy discourse about innovation often contrasts where we are today with where we could be once something different is in place.

Autonomous vehicles are a case in point. We think about our roads today, and then about how they’ll be when every vehicle’s autonomous, driven by autopilots exchanging data constantly with one another.

But the biggest problems will occur during transition – in this example, when some vehicles are autonomous and others not, with the two types managed very differently and their drivers behaving in quite different ways.

Technological transitions of this kind will always happen at different speeds in different places, too, and those different places will have different conditions. The dynamics of road use, for instance, differ hugely between the multilane highways and urban grid systems of the United States and the rutted tracks of many rural areas or the congested city roads of London, Mumbai or Nairobi.

Imagining how to get to outcomes is more difficult than imagining outcomes themselves. If we’re to make future technologies work well for us, we need to focus on how to manage the transition from what we have to what we want (or, in some cases, what we’ll get regardless). That requires far more attention to other public policies, from the quality of infrastructure to issues of equality, than techno-policy is used to getting.

Thinking ahead

The relationship between us and machines is complex and growing more so. We’re nowhere near the kind of change that Martin Rees projected in my opening paragraphs, and not yet near the kind that set alarm bells ringing for Hawking and Gates.

But we are seeing technological change outpace the capabilities of public policy, and we need to adjust the way we make decisions about technology to make sure that change broadens policy options rather than narrowing them. Three final points on that.

First, this requires more attention to framing the future that we want. Some have been dismissive of discussions around the ethics of AI because they think existing instruments of governance sufficient. I disagree. Parameters are changing, impossibilities becoming possibilities. Established mechanisms are no longer adequate to address what’s new and will be weaker still when it comes to dealing with what’s next.

Second, it requires more international cooperation. The prospects for this are looking poor. There’s competition for digital primacy between the US and China. Ideological differences abound between and within countries. Businesses, governments and citizens have different interests. Some governments and businesses see new technology as a way to increase political or economic dominance. Geopolitics are worsening.

And third, it requires rethinking how policy is made, not least retooling the processes and institutions that make it. Much of our thinking about how public policy should be is based on how things used to be (and that includes internet governance). It needs, rather, to be derived from how things will or ought to be. Which requires faster, more responsive processes, better informed citizens and policymakers, and attention to long-term as well as short-term outcomes.

Which are all in short supply, with prospects – see policies on climate change and ill-preparedness for the pandemic – looking less than bright.

Five questions

So I’ll end with these five questions that policymakers ought to think on now:

  • Who sets the boundaries between human and automated decision-making – and how much authority are we prepared (or do we think it safe) to cede to automation?
  • How do we deal with the variable pace of change between countries, communities and individuals – and its implications for (in)equalities of wealth and power?
  • How do we assess potential risks and opportunities – not least when most of us won’t understand the options, algorithms or consequences?
  • How do we deal with unintended use of new technologies – in weaponry, in surveillance, in commercial exploitation?
  • How do we evolve our institutions – including, for those of us that have them, our democracies – to respond quickly and shape the future that we want?

 

Image: Message from artificial intelligence.. by Michael Cordedda via Flickr (CC BY 2.0).

David Souter writes a weekly column for APC, looking at different aspects of the information society, development and rights. David’s pieces take a fresh look at many of the issues that concern APC and its members, with the aim of provoking discussion and debate. Issues covered include internet governance and sustainable development, human rights and the environment, policy, practice and the use of ICTs by individuals and communities. More about David Souter.