Inside the Digital Society: The future is another country

“The past is a foreign country,” wrote L.P. Hartley at the start of his novel The Go-Between: “they do things differently there.”

Times change, in other words. Today’s ways of living are different from how things used to be, and in recent years technology has played a big part in driving that change.

But the pace of change is accelerating, and there’s been a lot of talk these last few months that it’s accelerating very fast: that the future, even the near-term future, could be very different indeed from how things are today. It, too, will be another country.

The new tech on the block

The talk’s about generative AI; large language models; new applications that will allow all of us (or so it’s claimed) to delegate a lot of what we do to algorithms developed by data corporations that will run on our devices (and within ‘their’ clouds).

New programmes such as GPT-4 exploit machine learning in ways that offer us what we want by way of text and images, code and video, art and music. They’ll write our student essays for us, our news stories (real or fake), our academic papers. They’ll find new drugs and chemicals (for good or ill), win us arguments, and become the digital products we need to fulfil our dreams. Just read the instructions, or follow intuition. Or so it’s promised.

There’s a race to market going on between the data corporations, each keen to offer programmes that relieve us of burdens by drawing on the massive data sets that we now generate online and that they control.

Will it be ‘transformational’?

There are different views out there on this.

There’s agreement, though, that it will likely be transformative: a step change in technology, like the internet some 30 years ago, but much, much faster in the way that it transforms.

So fast, indeed, that governments/regulators, users and even tech businesses themselves will struggle to keep up. And pervasive, at least in those societies that are high-tech to start with.

This generation of innovation has the potential, in short, to shift the balance in much of what we do from human-led to digital. To be the step change that the metaverse is not.

What can it do?

Many people have been playing with these innovations as they’ve been rolled out by corporations keen to be first to market. And many of them are thrilled (while also spotting limitations in these early iterations).

Academics have been stunned/alarmed, for instance, by how readily new programmes can generate convincing student essays (though at present these sometimes seem to invent citations rather than identifying real ones). How can they use these programmes to improve their own articles? How can they reliably assess student performance?

Journalists are wondering how generative AI tools can help improve news-gathering and writing. Wondering how hard it’s going to be in future to tell journalists’ words from computer-generated text, and how easy to tell real information from ‘fake news’. Wondering whether such tools will prove cheaper than journalists. (Why should a publisher pay a writer if a programme can write a better story?)

Programmers have found that coding can be done much faster and much better by their digital companions (though they don’t always understand why the programmes do what they do). Scientists are searching for wonder drugs (for good or ill); corporations (and no doubt criminals) for new ways of maximising their financial gains.

Three views

So what are people saying?

There’s a good deal of wild enthusiasm, from nerdy geeks who think the whole thing’s cool to serious players in government, business and technology who think the gains from better ways of doing things will massively outweigh the risks.

Earlier waves of innovation – including the internet itself, the World Wide Web and social media – were greeted likewise. The prospect of a ‘magic bullet’ that can find solutions to problems that we’ve found intractable is most alluring. That’s a good thing, technophiles believe, and may also give us humans more time to be creative and enjoy ourselves.

Others, though, are keen to show that ‘magic bullets’ aren’t just magic: they are also bullets. They point to three risks in particular:

  • That new technology’s as valuable to those whose aim is harm (and as valued by them) as it is to those whose aim’s benign;

  • That delegating much of what we do and the ways we make decisions to algorithms that we little understand reduces human agency and hands great power to technology itself and those in charge of that technology (governments, including some that are authoritarian; corporations, including some that have little interest in the public interest);

  • That ‘moving fast and breaking things’ – the old mantra of Facebook that’s enjoying a revival here – is irreversible: if ways of doing things that we value are broken by technology, we won’t be able to restore them should we find we want them back.

The pace of change – in technology and impact – is important here. A third view argues that both opportunity and risk are over-hyped, and that we have more time to work out what we want than either fans or foes of AI’s forward rush suppose. Whether that’s true depends not just on time, but on the willingness and capability of governments and other stakeholders to build consensus and establish norms and principles to guide development: not something that the present world is finding easy anywhere.

Six points

So how should this debate move forward? I’ll make six points and then suggest some questions that we ought to keep in mind (asking ourselves, not just our algorithms).

First. This technology is now out of the bag. Whether it’s the fount of future knowledge and prosperity or existential harm (or both, or neither), it can’t be uninvented. The questions that we need to ask are to do with what it does/will do and how it should be governed.

Second. Any new technology can be used for good or ill, by authoritarian as well as democratic governments, by trolls as well as teachers, by criminals as well as advocates of human rights. Anything that’s good at writing code is good for writing malware. Assessing risk’s as vital as assessing opportunity.

Third. ‘Trust’ is fundamental to human society, and trust in many areas of life has been eroded in the last decade. More realistic fakery threatens to erode it further, and fear of this is justified. Consider, for instance, the impact of faked videos of politicians threatening war, inciting racial hatred or influencing election outcomes. Hence the widespread concern with AI ethics: if this is the step change that it looks to be, those ethical concerns become more urgent.

Fourth. All new technologies affect different societies in different ways. Access to them depends substantially on income and resources. Richer countries with more pervasive technologies and established digital sectors will have a head start over those that are poorer/less developed. People and businesses within countries, likewise. This has implications for international and domestic power structures and concerns about equality.

Fifth. The consequences of these innovations could quickly become irreversible, in ways that may seem small but are socially substantial, requiring mitigation in some cases and new kinds of response in many/most. If students routinely generate their essays through algorithms, to take one simple instance, new ways must be found of judging their actual understanding of their subjects. If algorithms are writing news, we need to know what tweaks their text.

Sixth. This new wave of techno-innovation disrupts international debates that are already underway. The next two years will see the 20-year review of commitments made at the World Summit on the Information Society (WSIS) and should see the adoption of the UN’s Global Digital Compact (GDC). UNESCO is debating principles for platform governance. If international initiatives like these don’t address the new platforms that AI’s generating, they won’t be adequate for setting norms.

Six questions

And what questions should we ask? I would suggest these six overarching questions, with subsidiaries, which should frame analysis and inform those international fora. (These are not meant, I’d add, as judgements for or against, but as questions to be asked if we’re concerned to shape human society as it becomes more digital.)

First, who gains? As new AI platforms become more pervasive, which countries and which corporations gain, and which lose out? What are the implications for international politics and economic relationships, including the balance between rich and poorer countries? What risks arise in competition for ideas and competition over markets?

Second, what happens to power structures? How widespread will new algorithmic methods be within societies? Will they democratise participation, entrench the power of existing hierarchies (in government, business and technology), or enable the emergence of new dominant groups within society, economy and politics? What is their impact on inclusion and equality likely to be, and how might that be tweaked?

Third, how will these new technologies be used? How might/will they be used to improve public services and in the service of democracy? How might/will they be used by criminals and by authoritarian governments? How will their use by citizens in general affect the nature of society, of economic life, of interpersonal relations?

Fourth, what are the implications for trust within societies? Successful societies share important common understandings: norms like the rule of law, for instance, trust in the integrity of certain information sources, or in the stability of banking systems. Will new platforms reinforce or undermine these common understandings?

Fifth, what are the implications for human rights? Will algorithmic systems displace human judgement in contexts that engage those rights, for instance in administering justice? Should content made by ChatGPT benefit from the same ‘freedom of expression’ rights as human speech and writing? Will it be subject to libel and defamation laws? Are human rights upheld if decision-making is digitalised, or are they threatened by a culture that thinks the computer’s always right?

And sixth, how can new systems such as these be governed? How can their impact be assessed (by us or by more algorithms)? How can experience be shared? What international mechanisms might establish principles, in today’s polarised environment? What would be the impact of leaving governance, in practice, to the market – in effect, technologists and corporations? Where lies accountability? Who can require transparency from algorithms? And who should be involved in setting norms, and how?

Conclusion

A few thoughts, then, about the implications of what many think is a step change in the relationship between human and digital development, and what some see as the start of a transition from the age of human to machine predominance (anthropocene to cybercene).

Of course, thanks to this new technology, you have only my word that what you’ve just read is mine. For all you know, it might have been written, at my prompt, by one of the large language models that ‘I’ have been considering. So my last and underlying question to you is: would you care? How much would, and should, it matter to you whether I wrote this or an algorithm did?

Image: Using ChatGPT by Focal Foto via Flickr (CC BY-NC 2.0)

David Souter writes a fortnightly column for APC, looking at different aspects of the information society, development and rights. David’s pieces take a fresh look at many of the issues that concern APC and its members, with the aim of provoking discussion and debate. Issues covered include internet governance and sustainable development, human rights and the environment, policy, practice and the use of ICTs by individuals and communities. More about David Souter.