Inside the Digital Society: Notes on a Scandal

It’s been a while since my last blog. I’ve been busy working on a number of issues that are going to come up later in the year – the relationship between the digital economy and the environment; ways of monitoring national internet environments; the forthcoming review of the World Summit on the Information Society (WSIS).

It’s going to be a big year for international discussions about digital issues – with the UN’s Global Digital Compact (GDC) due to be negotiated as well as first steps towards that WSIS review. But I want to start this year with something more domestic.

Computer says what?

I don’t usually write about things in my own country, but there’s a scandal here that’s been making waves and that illustrates important points about the way we view digitalisation. It involves the British Post Office, a big computer system that went wrong, the harm it did to people’s lives, and the failure of our checks and balances to scrutinise it properly. It’s been described by the country’s prime minister as “one of the greatest miscarriages of justice in [the] nation’s history.”

Some background

You’ll need some background before I come to points that apply to everyone and every country. Bear with me while I tell the tale.

The Post Office in Britain has branches up and down the land. Most of these aren’t run directly but as franchises. They sit inside local shops, as parts of independent businesses, often family firms, with the postal side of the business overseen by Post Office Limited (a government-owned company).

More than 10,000 subpostmasters (as they are called) run these branches around the country. They’re valued parts of local communities – part of the social glue that holds places together.

A false Horizon

In the late 1990s, the Post Office introduced new software called Horizon to manage the business these franchises do on its behalf. That software came from a major IT company (Fujitsu, as it happens) and it was rolled out across the franchise network.

And then something strange began to happen. Horizon reported that money was going missing – lots of it, in lots of places. Between 1999 and 2015, 4,000 subpostmasters were accused of fraud or of stealing from the system. The Post Office prosecuted around 900 of them, and many were advised by their lawyers to plead guilty to avoid the harsher punishments they risked if they maintained their innocence.

Some 700 were convicted. More than 200 went to prison. They were vilified in the communities they served. Some were bankrupted, made to pay back money they appeared to owe (but didn’t). Marriages were ruined. A few took their own lives.

A credulous system

Now, you might think it odd that so many subpostmasters found themselves in this predicament. And you’d be right. The thing is, it wasn’t the subpostmasters who were guilty of false accounting; it was the software.

That software was faulty, but it was believed – by the Post Office, by its lawyers and by the lawyers of its victims, by the judicial system that convicted them, by the communities in which they lived, and by the majority of Britain’s media.

It was inventing numbers that weren’t true, and innocent small businesses were forced to pay the price of its mistakes. To use a word that’s become fashionable as we get used to generative AI, it was “hallucinating”.

A failure of both systems

This was a failure of both technology and human systems. The software didn’t work, but too many people thought that whatever the computer said had to be right.

There was clear evidence, from an early stage, that the software was faulty, but it was not exposed. Postmasters reported problems, but the Post Office was in denial. The widespread nature of the problem was not revealed to those involved in court – those being prosecuted, their lawyers, the wider public. Some victims were repeatedly told their cases were unique, even as those in charge were being told they weren’t.

This story has only now become a scandal, a quarter of a century after it began and years after the latest prosecutions. Very belated action is being taken to compensate the victims (though the compensation is far from generous and can’t make up for ruined lives).

The failure of the media

There’s one more point to make before I reach for generalities, and that’s the media’s failure. This story was barely reported by mainstream media until very recently. The only outlets that did serious investigative journalism, and did it over a long period, were Computer Weekly, a specialist IT journal, and Private Eye, a satirical magazine that also has a record of forensic journalism.

A few Members of Parliament got involved, but the main campaign was led by an early victim of the software who organised his fellow victims. He too had a hard time getting heard in mainstream media.

What finally broke the story to the wider public, and led to action being taken, wasn’t even journalism: it was a TV drama based on the story of that early victim and his campaigning work.

So what has this to do with us?

Now, this may seem like a local story about bad practice in one country, but it has much wider resonance. I’ll make four points.

Trust in computer systems

The first, of course, is that there’s danger in assuming that whatever digital systems tell us must be true.

Computers make bad decisions just as the human systems they’re replacing do. These can be caused by failures in programming – such as the bugs behind Horizon’s false accounting – or by biases in the data on which AI systems are now trained.

Many people, and many systems, though, trust the decisions that computers make more than they trust those that are made by other people.

Clearly there are times when computer judgement is better than human judgement – for instance, in the early detection of disease. But ceding power to automated processes – thinking that “computer says yes” or “no” should be definitive – is dangerous, especially where human lives and livelihoods are concerned.

Such processes ought to be tools, not substitutes for human judgement. The tale above is one example, but the same risk applies in many systems. Automated decision-making should always be subject to review.

Failing institutions

My second point’s to do with failing checks and balances.

Institutional mechanisms should have spotted and addressed this problem long before it ruined quite so many lives. The proportion of this small business community alleged to be committing fraud was absurdly high, and no prior evidence suggested any such thing. Yet the checks and balances that should have been in place all failed to point this out.

No systematic audit of the software was undertaken, not even when (from a very early stage) the Post Office became aware of problems. Management systems in both Fujitsu and the Post Office plainly failed, spending more time protecting themselves than inquiring into what was going wrong (a public inquiry is now underway). The judicial system failed to recognise the scale of the problem or to ask questions that should clearly have been asked. It proved extremely hard to get newspapers interested. Things were never properly interrogated.

It’s not just that computers are trusted to get things right when they don’t always; it’s also that our systems need adjustment to make them capable of overseeing decision-making processes that have been automated. The checks and balances we have relied on until now are not sufficient.

Where were the media?

The third point is specifically about the media. This scandal was not exposed by mainstream media. Years of persistence by a couple of niche magazines gained little traction elsewhere, bar the odd news story and one or two items on TV. What ultimately made the difference was a TV drama featuring some famous actors, based on the life of one persistent individual and the campaigning group of victims that he formed.

There’s nothing new about the years it can take to expose a scandal. Americans, for instance, could point to how long it took to reveal what lay behind the opioid addiction crisis. But mainstream media in Britain has a long tradition of exposing scandals of one kind or another. What failed this time?

One problem is that investigative journalism costs money. Newspapers today are under big financial pressure, thanks to the loss of readership and advertising revenue that has followed so many people’s shift to social media as their main source of news. Clickbait and celebrity gossip are much cheaper ways of filling pages than the prodding and probing that investigation requires.

The fact that a TV drama raised public concern is also interesting. There’s a tradition of that kind of drama in Britain too, but the impact that TV can have has also been diminishing – a result of proliferating channels, time-shifting and online alternatives. Serious drama can still shift people’s views, but it must be really big and really good to hit the headlines now. It too is expensive compared with “reality TV” shows like Love Island, programmes about food, or daytime chat shows that can fill the schedule.

Artificial intelligence?

My final point, of course, concerns AI.

AI will make automated decision-making much more widespread. Not only will more decisions be made without direct human involvement; they’ll be made through algorithms that those affected – and even those who manage them – will not understand and will find hard to challenge. Remember how difficult it was for Britain’s subpostmasters to challenge software far less complex and sophisticated than that which AI is bringing. And these new systems will be trained on data that have absorbed inherent biases from past decision-making.

Digitalisation once promised to move us towards greater transparency in the ways our lives are governed (and I don’t mean just by governments). New AI systems will move us the other way: less transparency, more opacity.

What should we learn?

What should we learn from this sad story and from this new trajectory within our digital society? Two things, I’d say.

First, not to believe that digital systems are necessarily more accurate, or better at determining the best outcomes for individuals, communities or humanity as a whole. That may often be the case, but it often won’t be. We should subject digital and AI outcomes to at least as much scrutiny as we’ve applied to human systems in the past – and always remember that some people have a vested interest in how those systems work, an interest that may not align with the common good.

Second, to make sure we have institutions that are capable of interrogating automated decision-making, and ways for individuals and communities to challenge how it affects them. Political parties, trade unions, community organisations, newspapers, courts and, yes, social media all have a part to play in this – but transparency and accountability will be lost if they don’t evolve to meet their new environment.

 

Image: Village Shop and Post Office by Colin Smith, CC BY-SA 2.0, via Wikimedia Commons

David Souter writes a fortnightly column for APC, looking at different aspects of the information society, development and rights. David’s pieces take a fresh look at many of the issues that concern APC and its members, with the aim of provoking discussion and debate. Issues covered include internet governance and sustainable development, human rights and the environment, policy, practice and the use of ICTs by individuals and communities. More about David Souter.

 
