
Some people view the future with excitement. Others view it with trepidation. That’s especially so as we accelerate towards a digital society.

To one side, cyber-optimists imagine a world in which technology cures all our ills and illnesses (some even think that it might conquer death). To the other, cyber-pessimists fear one in which we lose control to our digital creations (and those people who still hold power over them).

In between, we realists seek to navigate a route that maximises what is valuable and minimises what is not.  I’ve spent much of the last two weeks in workshops concerned with artificial intelligence and its potential impacts.  Some thoughts this week on implications for policy-making in the digital age; next week on employment.

How smart’s AI?

There are different views about the future amongst insiders. Some are keen to emphasise AI’s potential while also reassuring us that we’re a long way yet from seeing algorithms take charge. But, as I’ve argued earlier, it’s a transition process that we need to manage now that will have lasting impacts.  Distant possibilities don't undermine the need for that.

Algorithms already make decisions that affect our lives with relatively little human intervention. The extent to which they do so is increasing daily. The balance between algorithmic and human decision-making over your insurance policy, your loan, your university place, your right to benefits is changing.

What do people think of this? Dystopians believe we are, and should be, fearful. But there’s evidence that some people think the opposite: that some prefer algorithms to take decisions because they think they’re more objective than human decision-makers. It’s even been suggested that some businesses pretend things are AI when they are not, to take advantage of this.

Yet, as we know, algorithms are only as good as the code in which they’re written; and machine learning’s only as good as the data that it feeds on. The biases of coders and the biases in data sets affect the outcomes of the algorithms; thinking that they’re more objective than people leads to dangerous complacency.

What’s the policy goal?

I’ve argued frequently that the most important question facing us is this: Do we seek to shape the digital society or do we allow digitalisation – the technology and the corporations that now largely govern it – to shape the future for us?

At the moment, to a large extent, the latter’s the default. Rapid technological change isn’t only offering superficial alternatives (many good, some less desirable) to the ways we’re used to doing things; it’s altering the deeper structures of our economies, societies and cultures – the ways that we relate to one another, relationships between citizens and states, global economic power structures, the nature of employment, education, business, urban settlements.

If we want to shape these, we have to do so through policies that address such deeper economic, social and cultural impacts because it’s these deeper impacts, not superficial apps, that will really determine how we live in the digital age.

What are current policy priorities?

There’s a lot of talk today about the need for governments to develop policies for the digital society (or the fourth industrial revolution, or AI). Many are putting strategies in place. But most of these, I’d say, are too narrow to meet the goal I’ve just described.

This is often because they’re evangelical rather than analytical. They’re concerned only with the first half of the task that I identified: how to maximise the benefits.  China’s government wants to be number one in artificial intelligence. Britain’s wants to be the safest place to go online. Both want their firms to come out top in the global race to digital.  

These policies see innovation and digital developments primarily as opportunities for national growth.  They’re about making the country ‘great’ or ‘greater’ or even ‘great again’.  They are, essentially, industrial policies, which is fine in itself – I believe in the value of industrial policies – but only half the story when it comes to shaping future societies.

The fallacy of ‘nice IT’

The problem here, I’d say, is threefold.

First, it’s built around the false premise that IT is 'nice' – benign, progressive, positive, can be trusted to deliver gains for all. But we know that isn’t so. IT’s enabling. It enables everyone – criminals, authoritarians, exploitative businesses as much as saints, democrats, liberals and businesses who take social responsibilities to heart.  

In any case, as with all technologies, it’s the unexpected, unanticipated outcomes that are often most significant (think climate change; think social media). So policies that seek to shape the digital society should be built as much around analysis of risk and of potential harm as they are round opportunity.

The fallacy that IT is enough

Second, too many policies focus on ICTs as if they’re separate from their economic and social contexts – as if they’re solutions to the challenges of infrastructure, industry, economy, society and culture, rather than embedded in them.

Tech policy’s too often made by techies thinking of their own technologies, with the backing of politicians, think tanks and consultancies seduced by techie certainties. To move fast and break things may suit innovators and their commercial sponsors, but it’s not a good basis for national prosperity, welfare or inclusion. It’s not just things that get broken, it’s people too.

The fallacy of digital exceptionalism

Third, policies towards the ICT sector are too often built around digital exceptionalism (which partly stems from the fallacy of nice IT, above).  Data businesses and technologists argue, in effect, that they should be exempted from the kind of governance mechanisms that apply to other economic sectors because these would impede innovation and prosperity.  

This, it seems to me, is problematic, for two reasons.

First, it implies that technology and industry, not public policy and government, should shape society. But there are good reasons for the protective principles that are applied in other economic sectors: in agriculture and food production, mining and manufacturing, financial services, pharmaceuticals, genetic engineering. These balance the priorities of business with the non-commercial needs of ordinary people (individual and collective).  Environments are less polluted, people are safer and less exploited thanks to them.

And it’s not necessarily the case that regulation stifles innovation; it can also stimulate it. Rigorous testing of drugs before they come to market leads to more sophisticated innovation, better drugs and fewer deaths. Environmental audits lead to better, not poorer, technology development and avoid blind alleys (like Bitcoin’s unsustainably prodigal use of energy resources). Today’s telecommunications markets are more effective at delivering access because they were regulated when they were liberalised, requiring businesses to address the public interest. Football and chess are better games because they’re played by rules.

Innovation and the public interest

The balance between innovation and the public interest is what’s crucial here. Do we seek to shape the digital society or do we allow digitalisation to shape the future for us? If the former, then we need a more, not less, substantial dialogue between those who innovate, those who commercialise innovation, and those whose role is to protect and to promote wider public welfare, including issues of inclusion and equality, environmental sustainability, development and human rights.

There are two underlying challenges here which make this particularly difficult. 

The first’s the shifting power structure within the digital society. Digital development, including standard setting, is now dominated by a small number of corporations, based in two countries, with very concentrated ownership and management structures. This is far from the ‘multistakeholder ideal’ that was advanced when the World Summit on the Information Society set out its parameters fifteen years ago.

The second – a familiar theme here – is the pace of change in new technology. This is faster than our institutional capacity to analyse it, develop policies or implement them. The answer to this, though, is not to give up on policy and hope that techies (and those that profit from them) deliver for the best. There’s been some discussion of ‘adaptive’ policy and regulation, which adjusts dynamically as circumstances change; this deserves consideration.

Navigating the route between maximising what is valuable and minimising what is not in an increasingly digital environment requires new thinking about the relationship between innovation and public policy. There’s scope and need for innovation in the governance of technology as well as in technology itself. Now there’s a task for multistakeholder cooperation.

Image: "Person kneeling inside building", by Drew Graham.

David Souter writes a weekly column for APC, looking at different aspects of the information society, development and rights. David’s pieces take a fresh look at many of the issues that concern APC and its members, with the aim of provoking discussion and debate. Issues covered include internet governance and sustainable development, human rights and the environment, policy, practice and the use of ICTs by individuals and communities. More about David Souter.