Inside the Digital Society: AI hopes and fears

Artificial intelligence has been the topic of the year in digital discourse. Or, more specifically, large language models (LLMs) like ChatGPT, which have demonstrated the ability to generate text, images and videos that look credible in a way that previous attempts have not.

These LLMs have thrilled, excited and alarmed. They have the potential to be highly disruptive to the ways we humans do things, from marking students’ essays to influencing voters, improving search engines and facilitating innovation. They have not been helped, though, by a propensity to make things up – inventing non-existent sources, for example, to cite in those essays – a flaw that has come to be called ‘hallucination’.

It's about more than GPT

This blog’s not about these LLMs but about the bigger debates they have inspired over the ways that AI will change our lives, for good and ill, in future. Those debates have become more polarised during the year in which LLMs have become so fashionable – and more polarised than those around the dawning of the internet (which was generally thought to be for good).

Optimists and pessimists

On the one side stand the optimists – tech bros, big data corporations, some governments and international entities – who focus on the dazzling opportunities they see for fixing long-term problems. On the other stand the pessimists, some fearing that a time will come when AI could displace us in deciding human destinies, even some day decide we’re surplus to requirements.

Science fiction fans might spot successors here to the benign conjunction of human and digital lifeforms in Iain M. Banks’ Culture novels, on the one hand, and, on the other, the dystopian contortions of The Matrix and of Star Trek’s Borg.

Varieties of pessimism

Those whose fears are existential – will humanity survive or lose control? – include some of AI’s pioneers, who have urged caution on their peers to ensure that innovation doesn’t run away with them. Their alarm has been criticised not just by those who believe that AI will bring about a new enlightenment, but also by those who think scare stories about the long-term future are a distraction (perhaps deliberate) from concerns about what’s going to happen in the shorter term.

For pessimism comes in several forms. Many who are not digital insiders are primarily concerned with short-term impacts – the loss of jobs, including professional jobs, that will result from AI models’ greater knowledge, computational ability and decision-making power; the use of new resources by politicians, companies and criminals to manipulate opinion, change behaviour and exploit the vulnerable; the impact they will have on equality, democracy and human rights; the loss of agency in how we choose to live our lives as more and more decisions that affect us come to be made by algorithms.

The timescale I would focus on is in the middle. Long before we reach the risk of existential crisis, there’ll come a time when the nexus between AI, human enterprise and governance becomes the norm, enabling more decisions to be automated without transparency or agency for those affected by them. This conjunction of human authority with digital determinism will affect (and likely concentrate) power structures and, because of its speed and scale, throw up more “unknown unknowns” than previous technologies (including the internet). Our societies are ill-equipped to deal with these.

The wider context

And what's the context for this digital development? There’s growing competition between major global powers and increasing instability in geopolitics; conflict in trouble-spots throughout the world; an upsurge in populist and demagogic politics and in authoritarianism. There are growing imbalances of power between large and small businesses, and between developed and developing economies. The destabilising impact of climate change is beginning to be felt. This is not a great environment in which to welcome further instability.

International initiatives

There has, as a result, been rapid growth in the number of initiatives that are looking at the ethics of AI – how to make its evolution consistent with human rights and sustainable development. These are coming from international organisations such as UNESCO and the G7 group of industrial economies, from governments like those in the US and Britain, from business consortia, think tanks and academic institutions.

There are powerful vested interests involved in some of these, especially companies that are pioneering AI and the governments that hope to profit from their presence. There’s a balance being sought between anxiety, commerce and national interest.

Dealing with the existential

We’ve some (mixed) experience of dealing with what have been seen as existential problems for humanity since the United Nations was established to reduce the risk of recurring global conflicts. Two examples from that experience have been suggested as models for handling the future of AI.

The International Atomic Energy Agency has proved a reasonably effective forum for managing the benefits and existential risks of atomic power. Nuclear technology has been put to secure and peaceful use, and the awesome risk of nuclear conflict has been held at bay, with the help of (sometimes fragile) international agreements on non-proliferation and arms control. The risk of nuclear conflict remains significant but has receded in most people’s minds since the 1960s.

The Intergovernmental Panel on Climate Change has been less successful – perhaps because the existential impact that it’s set up to deal with is gradual rather than immediate (as nuclear conflict would be). Climate-changing emissions have continued to grow, governments are not achieving their stated commitments, and there has, if anything, been a recent upsurge in climate scepticism undermining those commitments.

Responsible innovation

The comparable goal with AI is also concerned with global safety – an approach that’s sometimes called ‘responsible innovation’, which aims to maximise the gains and minimise the risks that can (and can’t yet) be foreseen. That goal can be found in the UN Secretary-General’s approach to digital cooperation (and the new advisory body that he’s set up to consider what it means) as well as in other initiatives concerned with AI ethics.

These approaches face challenging circumstances. The technology’s evolving very fast. The race to be first – between big tech corporations and the countries where they’re based (particularly the US and China) – is well and truly underway, and the prize of winning is considered crucial to future commercial success and national economic power.

Tech firms are used to innovating first and dealing with the consequences later (‘permissionless innovation’, as it’s often called), rather than to the culture of precaution that has been familiar in other innovative sectors such as genetics, nuclear energy and pharmaceuticals, or in managing environmental impacts.

There’ll be pressure, too, on governments everywhere not to get left behind in the contest to implement AI. Some will take advantage of new opportunities to control their populations. Most or all are likely to seize opportunities to reduce expenditure (and taxes) where they can by automating public service processes and replacing human with algorithmic decision-making.

And non-government actors will take advantage of new technologies to meet their goals: not just health managers and teachers trying to improve their services, but non-digital businesses seeking to maximise profits, political actors wanting to manipulate opinion and criminals finding new ways to exploit the vulnerable.

There’s no reverse gear for AI

The integration of AI into economy and society is already happening and will accelerate. No one believes this wheel is going to be un-invented or that it can be put into reverse. The questions this raises are important across the board, and predictions about what will happen – to the technology itself and to the impacts it will have – are very hard to make. The gains from getting AI’s development and regulation right could be tremendous, but so could the risks from getting it wrong. There’s little time to plan ahead, and the scope for international cooperation is weaker now than it’s been for a decade.

Who’ll be in charge of its development?

I’ll end with one of many questions that arise from this: one that’s central to those international fora currently discussing where AI should go, that should also be addressed by the technologists, businesses and governments at its cutting edge, and indeed by those of us (governments, businesses, civil society entities and citizens) that aren’t. That question is: who will be in charge of its development?

AI innovation’s highly concentrated: in the US and China (with significant, but less decisive, work in other richer regions), and in a small number of very high-tech companies. Even in those countries and those companies, few people have a high-level understanding of the technology itself, its potential value (societal, commercial, governmental, military) or its risks. Those that do – AI insiders – come from a very narrow range of backgrounds (predominantly male, techno-centric with little background in the humanities or social sciences, and, thanks to their roles in the technology, financially well-off).

One thing that AI’s not is multistakeholder. Decisions that will establish the parameters for its future development and impact are being made in the boardrooms of tech corporations, by technologists and business managers, with little input from civil society or from many governments (especially those in the Global South). Even the insiders making those decisions have little idea of, or consensus about, what’s around the corner. Developing countries in particular are going to end up using AI systems that are designed within developed economies, with little reference to their needs or circumstances.

While this worries some of AI’s pioneers, others – like Yann LeCun, chief AI scientist at Meta/Facebook – are convinced that technologists will get things right and regulators will get things wrong. Should governments and citizens really leave something this important to technologists? And, if not, how can non-technologists insert their priorities into the processes of innovation taking place in the testbeds of technologists?

That’s my question for today. More questions on the future of AI next year. Next time, some thoughts on digitalisation and democracy.


Image: AI for Good Global Summit 2023 by ITU Pictures via Flickr (CC BY-NC-SA 2.0 DEED)

David Souter writes a fortnightly column for APC, looking at different aspects of the information society, development and rights. David’s pieces take a fresh look at many of the issues that concern APC and its members, with the aim of provoking discussion and debate. Issues covered include internet governance and sustainable development, human rights and the environment, policy, practice and the use of ICTs by individuals and communities. More about David Souter.

