Inside the Digital Society: Digital and human ways of thinking

Here are two related questions about the relationship between digital technology – I’ll use ‘AI’ as shorthand – and humans:

  • what can AI do that humans can’t? and

  • what can humans do that AI can’t?

They’re questions that will become more and more important over the next decade.

What can AI do that humans can’t (or won’t)?

The first is maybe easier to answer. The processing power that’s available today/tomorrow means that computers, AI and datafication can/will analyse information and reach conclusions on a scale that humans simply can’t. Crunching massive data sets means that decisions can/will be taken that are ‘better informed’, and that complex processes (such as traffic management with automated vehicles) can/will be run even though they’re beyond our human capabilities.

There are worries in this, obviously: about inequality, about entrenching the biases in data sets, and about explicability (the fact that most of us, maybe all of us, can’t work out how such decisions are now made). But there are gains as well in areas that people care about – whether that’s quicker and more accurate diagnosis, or smarter matches to the things we like (album tracks, political polemics, cat memes or jokes).

What can humans do that AI can’t (or won’t)?

The answer to the second’s more elusive. Enthusiasts for automation talk optimistically about people moving on to jobs that are based on empathy and creativity – social care, for instance, and the arts. Or about the creation of new employment sectors that we haven’t yet imagined.

In practice, there’s a lot of aspiration and uncertainty in this. The caring and creative sectors aren’t well-paid or infinite. ‘Robots’ are already being built to do some roles in social care. Oxford mathematician Marcus du Sautoy lectures regularly on digital attempts to mimic the genius of past painters and composers. (It’s hard to mimic genius, but muzak isn’t genius and a lot more music’s muzak than is Mozart.)

And so to fiction

I’m prompted to think more about the links between these questions – and their complex and less binary implications – by a newly published novel, Klara and the Sun, by Nobel winner Kazuo Ishiguro.

I won’t reveal the plot (which has, I think, both power and flaws), but just the premise. The question at its ‘heart’ (I use that word deliberately, with ambiguity, as does the author) concerns the difference between digital and human understanding. Can digital thinking – by algorithm, by AI, in this case by an Artificial Friend (called Klara) – replicate human ways of thinking? Can it emulate them? Can it even comprehend them? And, likewise, can we, as humans, understand the way digital thinking works?

This is an old theme in science fiction, of course. The most interesting characters in the various Star Trek series are those who are not (wholly or partly) human and who are trying to understand human behaviour and interact with it effectively (for fans, I mean Mr Spock, Commander Data, Seven of Nine and the holographic doctor; I appreciate that other readers may never have seen Star Trek).

This is, though, a two-way street. Ishiguro’s digital hero(ine) interprets human behaviour logically, using the information that’s available to her. But that information’s incomplete and human behaviour is often irrational or unpredictable. Human choices, certainly, aren’t binary. And so she misinterprets.

Well-meaning algorithmic decisions don’t necessarily meet human needs. This happens outside novels, not just in them. At the same time, Ishiguro’s human characters don’t understand the ways that digital devices/people think. They expect precision, accuracy, what might be called ‘correctness’.

There’s a widespread assumption, not least among human decision-makers, that digital decisions must be better because they’re based on data and smart programming. It’s coupled with widespread ignorance, not least among decision-makers, about how digital decisions are being made and why: about the logic being used, and (as important) the logic that is missing.

A real-world example

A good real-world example of this mismatch happened in Britain last year, when the pandemic made it impossible to hold exams for school-leavers. An algorithm was devised to award school-leaving grades in their place, and that algorithm was deemed ‘good’ by ‘experts’. It used teachers’ estimates and other data, including schools’ average performance in recent years.

Astonishing as it may seem, the flaw in this – that high-performing students from low-performing schools would have their grades reduced, while low-performing students from high-performing schools would have them raised – was not spotted until a generation of school-leavers had received their distorted ‘results’. The algorithm was junked and students were graded instead by teachers’ assessments.
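To make the mechanism concrete, here’s a deliberately simplified sketch in Python. This is not the actual Ofqual model (whose statistics were more elaborate); it just shows how anchoring individual grades to a school’s historical average drags outliers towards that average. The numeric grade scale and the weighting are my assumptions, chosen only for illustration.

```python
# Simplified illustration, NOT the real grading algorithm.
# Grades mapped to numbers for the sketch: A* = 6 ... U = 0 (assumed).

def moderated_grade(teacher_estimate: float,
                    school_historic_mean: float,
                    weight_on_school: float = 0.7) -> float:
    """Blend a teacher's estimate with the school's past average.

    The heavier the weight on the school's history, the more an
    exceptional student at a weak school is dragged down (and a weak
    student at a strong school pulled up).
    """
    return (weight_on_school * school_historic_mean
            + (1 - weight_on_school) * teacher_estimate)

# A straight-A student (6.0) at a school whose leavers average a D (2.0)
# is moderated down to roughly a C:
print(moderated_grade(6.0, 2.0))   # 3.2 - the outlier is penalised

# A weak student (2.0) at a school averaging an A (5.0) is pulled up:
print(moderated_grade(2.0, 5.0))   # 4.1 - the weak student gains
```

Any scheme of this shape, however sophisticated its statistics, treats the school’s past as more informative than the individual’s present – which is exactly the injustice that students and teachers saw at once.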

Two things, I think, occurred in this that will occur more widely (and less obviously) elsewhere.

First, the algorithm, like Klara, relied on poor/inadequate information and could not deal effectively with what was less predictable (outliers). And second, it was opaque and inexplicable: its workings weren’t transparent but were locked in mathematics. Officials who overvalued the magic of algorithms initially defended it. Students, parents and teachers all knew that it was wrong.

Lessons should be learnt from this about the extent to which decisions over people’s lives can be resolved by algorithms alone – not just for next year’s grading but in many other areas of government. One wonders if they will.

A real-world problem

Much of the anger generated in Britain’s grading debacle resulted from students’, parents’ and teachers’ sense that they had no control or even influence over decisions that would have lasting impacts on their lives.

That is, of course, not just a digital problem. The subjects of many governments have little or no say in the decisions that affect their lives. But citizens in many countries do have expectations of transparency, accountability and some degree of influence over those outcomes.

Opaque algorithms, which are not understood by the officials running them let alone by citizens affected, are problematic. They may well deliver many public goods – earlier diagnosis, better traffic management, better targeting of welfare benefits – but if they don’t, they’re going to alienate, as in that schools example.

Even where they do deliver benefits, opaque and automated decision-making processes leave people feeling disenfranchised, seeing unfairness even where there’s none. If algorithms are going to make decisions that affect our lives, they need to be transparent and explainable – and there must be scope for their decisions to be corrected by human intervention.
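What might that look like in practice? Here’s a minimal sketch – illustrative only, with invented names and a toy eligibility rule, not any real system – of a decision pipeline in which every automated decision carries a plain-language explanation, and borderline or appealed cases are escalated to a human who can override the algorithm.

```python
# Illustrative only: invented names, toy rule, not a real system.
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    confidence: float   # 0.0 to 1.0
    explanation: str    # reasons the affected person can actually read

def algorithmic_pass(income: float) -> Decision:
    # Stand-in for a real model: a trivial benefit-eligibility rule.
    eligible = income < 20_000
    borderline = abs(income - 20_000) < 2_000
    return Decision(
        outcome="eligible" if eligible else "ineligible",
        confidence=0.6 if borderline else 0.95,
        explanation=f"Income {income:,.0f} compared with the 20,000 threshold.",
    )

def decide(income: float, appealed: bool = False) -> Decision:
    decision = algorithmic_pass(income)
    if decision.confidence < 0.9 or appealed:
        # Borderline or contested: route to a human reviewer, who sees
        # the explanation and can override the algorithmic outcome.
        print("Escalated for human review:", decision.explanation)
    return decision

decide(12_000)                 # clear-cut: decided automatically
decide(19_500)                 # borderline: escalated to a person
decide(30_000, appealed=True)  # contested: a human gets the final say
```

The point isn’t the toy rule; it’s the architecture: explanations by default, and a human path out of the algorithm’s verdict.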

Digital and/or human judgement

So where does this leave the balance between digital and human judgement?

Many digital insiders assume that digital decisions are inherently better. It’s easy to present them as more ‘objective’, though we know they’re influenced by the quality of data (often derived from poor or biased sources) and by the ways in which algorithms have been trained (how the machines have learned). Decision-makers in all sectors – government, business and, yes, civil society – are, in spite of this, often seduced by algorithms’ apparent objectivity, as well as by greater efficiency and expected savings in expenditure.
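The point about data quality can be shown in miniature. Here’s a toy illustration in Python – invented data, not a real study – of how a system ‘trained’ on biased historical decisions simply reproduces the bias it was fed:

```python
# Toy illustration with invented data: bias in, bias out.
# Past approvals favoured group "A" regardless of qualification.

historical_decisions = [
    # (group, qualified, approved) - approvals skewed towards "A"
    ("A", True, True), ("A", False, True),
    ("B", True, False), ("B", False, False),
]

# "Training": each group's past approval rate becomes the decision rule.
rates: dict[str, list[bool]] = {}
for group, _qualified, approved in historical_decisions:
    rates.setdefault(group, []).append(approved)
approval_rate = {g: sum(v) / len(v) for g, v in rates.items()}

def decide(group: str) -> bool:
    # Qualification is ignored: the historical data never rewarded it,
    # so the 'learned' rule has nothing to learn from it.
    return approval_rate[group] > 0.5

print(decide("A"), decide("B"))   # True False - past prejudice, automated
```

Nothing here is ‘objective’ except the arithmetic; the judgement embedded in the data is inherited wholesale.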

Should human judgement be so easily discounted? The Cambridge economist Diane Coyle suggested otherwise in a (typically thought-provoking) lecture that I saw last week.

Algorithms are good at tasks that can be formulated precisely, she suggested, but AI reaches its limits when faced with complexity and uncertainty – and societies and economies today are becoming more, not less, complex and unpredictable. Algorithms are much less effective where issues need nuanced judgements between multiple potential outcomes (positive, negative and, in most cases, both).

Greater complexity and uncertainty, she suggests, should favour human judgement over algorithms. Competitive advantage could therefore lie with those (companies, at least) that rely on nuanced human judgement over standard algorithmic thinking. They might, indeed, be the more innovative.

Three final thoughts

I’d suggest three final thoughts on this.

First, it seems to me that much of this debate about digital versus human judgement is about the difference between calculation and intuition (or some blend of experience and expertise). AI favours the former, while traditional public policy favours the latter. In practice, I’d suggest, it’s the combination of the two that will be most effective: calculation tempered by intuition; intuition tested by calculation.

Second, the differences between potential outcomes may be marginal. And the choices to be made are frequently political rather than technical: to benefit some social groups at the expense of others, for example, or to favour one policy goal (maybe environmental sustainability) over another (maybe economic growth). These aren’t judgements humans really want outsourced to algorithms that won’t experience the consequences.

And third, there may be a big difference between the ways that different actors respond to Professor Coyle’s suggestion. High-end companies may well invest more in human judgement; those with lower expectations/aspirations and more limited resources are less likely to do so.

What concerns me most in this is what governments will do. Most are under pressure to cut costs and taxes. The more that deciding things by algorithm looks cheaper (as well as more efficient), the greater the pressure to use algorithms with insufficient human oversight. As British schoolkids learnt last year, that bodes a lot less well than government had hoped.

Image: "Big data on racks", by Mirko Tobias Schäfervia Unsplash.com

See also: 2019 Global Information Society Watch - Artificial Intelligence: Human Rights, Social Justice and Development

David Souter writes a weekly column for APC, looking at different aspects of the information society, development and rights. David’s pieces take a fresh look at many of the issues that concern APC and its members, with the aim of provoking discussion and debate. Issues covered include internet governance and sustainable development, human rights and the environment, policy, practice and the use of ICTs by individuals and communities. More about David Souter.