Five years ago, in an early instance of this blog, I contrasted cyber-optimists and cyber-pessimists.
Cyber-optimists, I wrote, believe that digitalisation equals progress, making the future better than the past. Cyber-pessimists, by contrast, are fearful of what digitalisation might bring about. Optimists look forward to an information society; pessimists worry about a surveillance society.
Both had, I said, a case. And in that post five years ago, I suggested a third approach that I called cyber-realism: by which I meant less hype, less panic; less polarisation, more analysis; less fatalism in the face of new technology and more effort to shape digitalisation in ways we’d like to bring about.
Looking at the history
I’ve been reviewing the history of digitalisation lately, and that’s made me think again about the way the balance between digital optimism and pessimism has shifted in the 30 years or so that it’s preoccupied me.
Those days of hope
Those of us who’re old enough can think back to the 1990s, which were heady days in which the dawn of digitalisation coincided with big shifts in geopolitics. Together they made for a time of hope.
The early 1990s saw the end of authoritarian rule in eastern Europe, of apartheid in South Africa and military governments elsewhere. Democracy and accountability became established where previously they weren’t. Human rights were more respected. More responsive approaches to development were being drawn up and adopted. The importance of sustainability was recognised.
The kind of politics that many called ‘progressive’ seemed on the rise. And that was matched by a new interest in technology’s potential.
Terms like ‘ICT4D’ (information and communication technologies for development), which became fashionable in the later 1990s, seemed to offer ways of overcoming challenges in development that had previously appeared intractable, and began to be adopted by development agencies and the United Nations.
Three grounds for hope
Words like ‘leapfrogging’ were often used at this time to describe these ICTs’ potential. Hopes for democratisation and empowerment enthused digital pioneers, (some) development agencies and ‘social progressives’. The apogee of this belief in technology’s potential came with the Geneva Declaration of Principles and Plan of Action at the first World Summit on the Information Society in 2003.
Reflecting back on it, it seems to me that there were three main elements to this enthusiasm:
The idea that information, made more readily available, would empower individuals to pursue their goals more effectively, enabling them to overcome marginalisation, maximise their personal and communal potential, and challenge (potentially corrupted) power élites;
The belief that communication, made more accessible through mobile phones, would strengthen economic opportunities, build stronger relationships within communities, and enable collective action and political expression in ways that were much stronger than they’d been before;
And the conviction that new technologies would bring new ways of delivering development, from monitoring weather patterns to help farmers plan their work, to facilitating access to capital through mobile money markets, to mitigating natural disasters, and every other aspect of development in between.
The other sides of hope
What’s happened since? Well, all those things have happened to a large extent, but not alone.
Billions now use the information resources available online to pursue their goals and very many thereby gain both socially and economically.
New modes of communication enabled by technology have transformed social relationships and radically altered patterns of economic production.
New technologies have delivered new ways of addressing development challenges that were previously intractable.
And new waves of technology have added to the scale of opportunities and of potential.
The optimism, in short, that fed the 1990s has in many ways been justified. But each of those three elements of ‘progress’ has had its countervailing negatives, and those negatives have fostered pessimism (just as justifiably):
The information revolution has been accompanied by one in misinformation and disinformation. The internet has proved to be the most powerful vector for propaganda as well as for information that’s of value.
The huge expansion people have experienced in communication, not least through social networks, has not just extended their connections; it’s often focused their attention on those who think like them, fostering polarisation more than dialogue (see US politics, for instance).
Not all development challenges have been positively changed by new technology. Inequality is generally agreed now to have grown as new technology’s been more available to those who can afford to buy and use it more. New challenges have arisen for the environment. Cybersecurity's added a new dimension to the care that's needed in achieving personal and national goals.
And new waves of surveillance – by companies and governments – have added to the risks of new technology, changing the ways in which states, businesses and citizens now interact.
What about the geopolitics? And what about the politics?
What has happened to geopolitics in this era of the internet? And what has happened to the politics of many countries?
That happy coincidence of digitalisation and geopolitics before and around the turn of the century led many to think that the information society would usher in a time of empowerment for the individual, accountability in government, democratic participation, greater respect for human rights.
Look about you and you’ll see that hasn’t happened. Geopolitics today is much more polarised and dangerous than it was 30 years ago. I’m writing at a time of war and conflict in my own continent, Europe, and it’s not alone. Authoritarianism is on the rise, and democratic institutions are much weaker than they were. Freedom House and other agencies talk of a decade and more of declining democracy and growing threats to human rights.
The age of the internet, in short, has not proved to be the age of empowerment and respect for rights that had been hoped for. Optimists in those earlier times discounted two factors that have proved especially important:
That, as mentioned earlier, the internet’s as powerful a vector for disinformation/propaganda as it is for information;
And that data-driven technologies were always going to enable far greater surveillance of people’s lives, behaviour and opinions than was possible in previous generations.
Technology has not transformed power structures but interacted with them. It’s more effectively deployed by those with power than those without. Innovations such as social media, which can be used to facilitate public protest as they did in the so-called ‘Arab Spring’, can also be used to monitor it and to control it, as has happened in the years that followed.
Optimism and pessimism today
Which brings me back to optimism and pessimism now, at a time when technology is making new leaps forward.
Much of the discussion of AI today mirrors discussions of the so-much-simpler ‘Information Society’ two decades or so ago.
Optimists about AI point to the wonders that it could perform, across all fields of our endeavour. Pessimists worry that it will disempower individuals, be abused by powerful interests, substitute machine control for human interaction.
I’ve been struck in this debate by one thing in particular.
Most of the optimism I hear comes from technologists, who see technology’s potential for improving things and think that it’s technology that’s going to drive what happens. Their vision is supported, though for other reasons, by tech businesses that see their future profits driven by AI and seek to build their business bases round it.
Most of the pessimism I hear comes from social scientists, who focus on human behaviour. It’s people, in their eyes, who will determine how technology is used, and that raises deep concerns about the potential for technology to reinforce power structures, manipulate societies and ignore the long-term risks that are entailed.
I ended that blog post I wrote five years ago with four shifts in thinking that I thought were at the heart of ‘cyber-realism’, the alternative I posed between optimism and pessimism, hopes of utopia and fears of dystopia.
First, I suggested, we should stop thinking that the Information Society, as we then called it, would be ‘good’ or ‘bad’ but recognise it would be both.
Second, I thought, we should recognise that it’s already here, evolving rapidly, and changing how we do things, and that if we don’t consider in advance how we want the innovations we experience to shape our lives, they’ll shape our lives in ways that we don’t want.
Third, having acknowledged that, I said that we should recognise we have the right to shape the Information Society, rather than simply accepting what technology and markets give us: indeed, that shaping it’s the only way in which we can protect our human rights, our natural environment, the things we value – and in which we can prevent the things we fear.
Lastly, I suggested that we should consider today’s technology from the perspective of the future: by looking at what’s coming (and its risks as well as opportunities) rather than focusing on what was, what is and the short-term.
I think that those four points still stand, but with a warning. I think that technology, in many ways, is going to shape society, but – as a social scientist – think it’s the way that people use technology, and the way our institutions seek to manage or not manage it, that’s going to shape that shaping. And the people that will shape it most are those with power, not those without.
Digitalisation will affect the future of business, politics and geopolitics – but business, politics and geopolitics will shape the way it does so. There were more grounds for optimism arising from that in the 1990s than there are today; and more grounds for pessimism today than there were back then.
Image: Tagged! by JD Hancock via Flickr (CC BY 2.0).