The digital revolution has a complex relationship with privacy.
It depends fundamentally on leveraging data on people and everything they do. That’s where the ‘magic’ of the digital society resides – whether that’s recommending music tracks or analysing epidemics, driving automated vehicles or monitoring their owner’s personal lives or political connections.
And, equally, thereby, it threatens privacy: the right of individuals to do things on their own terms without others knowing or sharing them if they don’t want.
Opportunity and risk
This tussle between digitalisation and privacy has become increasingly important lately, as growing concern about the risks of authoritarian surveillance and what Shoshana Zuboff calls ‘surveillance capitalism’ has matched growing awareness of the potential impacts of new technology on economy, society and culture.
It’s become especially immediate as a result of the coronavirus crisis. What degree of oversight of individuals’ data should governments exercise in order to protect individuals’ health and welfare? What do individuals think of that, individually and collectively? What protections are required to uphold the core principles in rights discourse of necessity, proportionality and impermanence?
This week and next I’ll write some thoughts around this. This week, on five big picture issues that I think underlie the discussions that we need to have. Next week, some more immediate considerations around the current crisis.
The two-edged sword
The two-edged sword of data aggregation’s been inherent from the start.
The opportunities and, especially, the risks featured often in literature and cinema from early in the twentieth century – see, for example, Fritz Lang’s film Metropolis, George Orwell’s novel 1984, the dystopian fiction (and resulting films) of Philip K. Dick or the more benign machine-led universe of Iain M. Banks’ ‘Culture’ novels.
Global optimists were captivated by the opportunities of digitalisation around the turn of the century, most obviously at the World Summit on the Information Society, because they seemed to offer opportunities to address apparently intractable problems.
Those who warned that there were also risks involved – that datafying everything was inherently authoritarian rather than libertarian – were dismissed as gloom-mongers, even ‘luddites’ who were opposed to ‘progress’.
It was hard to get a hearing then for a more nuanced view, one that paid as much attention to potential risks as to opportunities. That more nuanced view has really only come to the forefront in the last five years or so.
But many of the parameters of the digital society have now been set. The opportunity to build risk assessment into those parameters was missed. Can it be re-asserted now?
What are my five big picture issues?
1. Privacy and rights
Privacy’s included in the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights, but the wording’s very limited. ‘No one’, says the Covenant, ‘shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence.’ But that’s it; there’s no further definition.
Of course, there’s been a huge amount of work since then to flesh this out – by the Human Rights Council, by UN special rapporteurs, by lawyers and governments and activists, including much around digital privacy – but the lack of definition in the basic instrument’s significant, not least because it legitimates big differences in different countries’ constitutional and legal frameworks.
Plus – and this is especially relevant today – Article 17 of the Covenant, which references privacy, is subject to Article 4: ‘In times of public emergency which threatens the life of the nation …,’ governments may ‘take measures derogating from their obligations’ under it. Governments are allowed by the Covenant to limit other rights as well, including expression, assembly and association, for ‘the protection of public health.’
I’ve no space to explore interpretations here, but will come back to the implications of the current crisis next week.
2. Digitalisation has changed the nature of privacy
The meaning of privacy has, in any case, changed fundamentally, as a result of digitalisation, since the international community agreed these instruments.
When they were agreed, we could treat privacy as the norm. Personal information belonged clearly to the individual. Anyone else who wanted it had to seek it out, by asking or by searching, legitimately or illegitimately. Businesses did market research rather than using algorithms to search customers’ data. Authoritarian governments paid informants rather than deploying spyware.
Today, because of digitalisation, the opposite is true. Data are now gathered by default on any activity that’s capable of being digitally captured – and as the digital revolution proceeds, that increasingly means almost any activity at all – certainly for those who live (and benefit from) highly-connected lives, but increasingly, too, even for those whose lives are less-connected.
This fundamentally changes the nature of privacy, and it fundamentally changes the basis for privacy’s protection. Withholding data is no longer feasible in the digital society. What matters for protection now is what is done with it, by whom, for how long, with what impact, with what security, and how that is accountable. Old laws, based on old assumptions, aren’t enough for this. New ones are needed that recognise what’s happened.
3. Who has your data has been changing
A second fundamental change regarding personal data concerns who holds most of them.
Governments accumulate data on individuals, as individuals and aggregated, for different purposes, which haven’t fundamentally changed.
All governments need data on individuals in order to gather taxes, allocate resources, deliver public services, enforce the law and enable interventions in policy and practice like anti-discrimination laws.
Some governments exploit data in order to control individual behaviour, restrict civil and political rights, prevent industrial or political unrest, suppress minorities, etc.
These two aspects might be described as the opportunities and risks of government relationships with data. Datafication has greatly increased the potential for both of them.
But (with a few exceptions) governments are not, these days, the principal owners of data regarding individuals in their countries. Those principal owners now are commercial businesses – some domestic, but also global corporations that harvest and leverage data generated by people’s and organisations’ use of online services, including social media. Ironically, these sometimes claim to be protecting individual privacy against government intrusion while intruding more deeply into individual activity than almost any government.
4. Automated data use
A third fundamental change regarding data and privacy concerns the ways in which they are now used.
In the past, exploiting data was more difficult and expensive. It took a lot of money and resources – human and computing – to analyse even quite small data sets. Researchers used file cards instead of databases. Authoritarian governments needed huge numbers of staff to monitor even small numbers of dissidents. East Germany’s secret police, the Stasi, had around 100,000 staff and nearly 200,000 informants in 1989 to monitor just 16 million people.
Now, massive data sets are routinely analysed, not just on their own but in combination with one another, in quantities and ways and details that were entirely impossible before today’s computers were available. Artificial intelligence, machine learning and complementary technologies will extend these capabilities still further during the next decade.
This step change in capability exacerbates the shifts in data gathering described in 2. and 3. above, for two reasons, whether the data concerned are leveraged for public interest (better medical monitoring and treatment), commercial advantage (selling widgets), or state control (surveillance).
Far more can be deduced about an individual from any data held about them (and from combined data sets) than could have been deduced before.
Far more can be done with information derived from those data, and far more powerful decisions can be made, by automated processes – algorithms – without human intervention or involvement.
5. Can there be effective controls?
My final issue's concerned with regulation. The four issues I’ve described above, it seems to me, require a different way of thinking about privacy and data. Old ways of regulating aren’t going to be sufficient.
As usual, the aim should be to promote what’s beneficial in the public interest, protect what’s valuable to us today, and prevent what would be harmful.
Data protection and data privacy are being taken seriously by many governments (not least because of cybersecurity risks). Europe’s General Data Protection Regulation (GDPR) is often seen as the current global standard for protection of personal data. It ramps up historic norms of privacy for digital environments. It’s questionable, though, whether ramping up’s enough to deal with the transformation of privacy described. And many governments are nowhere near even that stage of protection.
I’ll end with three juxtapositions that are influencing outcomes.
Individuals may express concern about the loss of privacy, but most are not concerned enough to act in ways that protect their data against intrusive data-gathering, especially by commercial services they like or value.
Data corporations may profess commitment to user privacy, but they’re adept at exploiting opportunities to maximise the commercial value that can be leveraged and will continue to find innovative ways to bypass regulations.
Governments may sign international agreements to protect privacy rights, but these are likely to constrain only democratic governments, not those that are authoritarian.
Next week: some thoughts around the data and privacy implications of the coronavirus crisis.
Image: Data Security Breach, by Blogtrepeneur via Flickr Commons