Inside the Information Society: Does Europe’s GDPR change privacy and data protection?

In May, Internet users in Europe were flooded with emails from organisations telling them of changes to their data protection arrangements or asking them to renew consent for their data to be held and used.

Was this spam? No. It was the result of the General Data Protection Regulation (GDPR) – new European Union (EU) rules governing how governments, businesses, charities and others can use the data that individuals provide, and the inferences that they make from them.

First (European readers, please bear with me; others may be less familiar), a brief description, followed by some questions about what may happen next, and thoughts on two big issues – whether the GDPR will affect data protection globally as well as within Europe; and whether it will shift the balance between different styles of regulatory practice.

GDPR – the what and why?

The GDPR places tougher restrictions on what organisations can do with people’s data than those that previously prevailed in Europe and that exist in other jurisdictions.

  • ‘Data controllers’ now have to gain data subjects’ (users’) explicit consent to how their data are used or shared. They can’t, for example, pre-select check-boxes in ways that imply agreement (see the illustrative sketch after this list).

  • They must give data subjects access to data that they hold on them, and to the inferences and profiles that they’ve made using those data – if users so request.

  • They must provide them with their data in machine-readable form and allow them to transfer those data to a competitor.

  • They must allow users to erase some of their data, provided there’s no public interest override.

  • They must inform them of data breaches that may put them at risk.
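
For readers who build systems, here’s a minimal sketch of what the consent and portability obligations above might look like in code. The record structure, purpose names and methods are hypothetical illustrations of my own; the Regulation prescribes obligations, not interfaces:

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """Per-purpose consent flags. Everything defaults to False:
    the system can't 'pre-select the check-box' on the user's behalf."""
    marketing_emails: bool = False
    sharing_with_partners: bool = False
    profiling: bool = False
    granted_at: Optional[str] = None  # set only on an explicit opt-in

@dataclass
class DataSubject:
    user_id: str
    email: str
    consent: ConsentRecord = field(default_factory=ConsentRecord)

    def grant_consent(self, purpose: str) -> None:
        """Record an explicit opt-in for one named purpose."""
        setattr(self.consent, purpose, True)
        self.consent.granted_at = datetime.now(timezone.utc).isoformat()

    def export_machine_readable(self) -> str:
        """Access/portability: hand users their data in machine-readable form."""
        return json.dumps(asdict(self), indent=2)

# Usage: nothing is shared or profiled until the user actively opts in.
alice = DataSubject(user_id="u123", email="alice@example.org")
assert alice.consent.sharing_with_partners is False
alice.grant_consent("marketing_emails")
print(alice.export_machine_readable())
```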

These new rules apply – importantly – not just to organisations based in the EU’s 28 member-countries but to organisations that hold data on people who live in those countries. This includes data held about such people by online service providers and data management companies like Google, Amazon and Facebook. Silicon Valley needs to change its ways in order to comply just as much as businesses that are physically based in London, Rotterdam or Prague.

What’s more, huge fines – up to 4% of global annual turnover – can be levied on them if they fail. For a business with, say, €50 billion in annual turnover, that’s a potential fine of €2 billion – enough to get attention from even the richest online enterprise.

Privacy by design

There’s an important shift here in the legal frame for data management. I’ve argued in an earlier post that digitalisation requires us to change our understanding of how privacy – which is established but not defined in the International Covenant on Civil and Political Rights – can be respected.

Before digitalisation, it was possible to secure privacy by preventing data being gathered. In the digital era, that’s no longer feasible. Data are gathered by default on almost every aspect of our lives; that’s how the systems work on which we now increasingly rely – and on which we’ll rely even more as the Internet of Things, machine learning and other innovations spread.

In the digital age, therefore, protecting privacy has to depend on restricting how data are used more than on restricting data-gathering.

The GDPR moves in this direction by introducing what’s called ‘data protection by design and by default’. Organisations should design and implement systems in ways that automatically keep data private unless users authorise them not to (or where public interest overrides apply). In this way, it’s hoped, users will gain much more control – if not quite ownership – over data that are held about them.
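
To make the default concrete, here’s a small, hypothetical sketch of what ‘private unless authorised’ can mean at the code level. The function, its parameters and the purpose names are my own illustration, not anything the Regulation specifies:

```python
# A 'default deny' gate: no use of personal data is permitted unless the
# user has explicitly authorised that purpose, or a public-interest
# override applies. Purpose names and the override flag are assumptions.

def may_process(user_consents: dict, purpose: str,
                public_interest_override: bool = False) -> bool:
    """Return True only on an explicit opt-in or a public-interest override."""
    if public_interest_override:
        return True
    # A missing entry means the user never opted in, so the answer is no.
    return user_consents.get(purpose, False)

# Usage: an account with no recorded choices can't be profiled.
print(may_process({}, "profiling"))                                   # False
print(may_process({"profiling": True}, "profiling"))                  # True
print(may_process({}, "fraud_check", public_interest_override=True))  # True
```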

Why does this matter?

This is important because, as we know, people underestimate the extent to which governments and (especially) businesses hold data on them, and the extent to which those data are used to manage their relationships with governments and businesses. We don’t read terms-of-use agreements when we sign up to them, which means they function more as liability disclaimers for data controllers than as privacy protections for data subjects.

Most people, in other words, don’t act to protect their privacy (and, thereby, the privacy of those they interact with). If we want privacy to be protected, protection’s best located in the rules that govern those that gather and use data.

How’s this gone down with businesses and users?

All this, then, is intended to extend the rights of data subjects and impose responsibilities on those that gather, manage and process their data, use them to build up user profiles, and share them with others.

Big Tech businesses lobbied against it, disconcerted by the limits placed on data exploitation and startled by the level of potential fines. They failed to win the day, but it’s the biggest businesses that are best placed both to comply and to find and exploit loopholes. Court cases against big companies are going to take years.

Smaller businesses, charities and other organisations have been alarmed by the new burdens, and many have struggled to make sure they are compliant. Hence the flood of emails into European inboxes. Some will get into trouble, but I’d expect regulators to pursue those who deliberately flout the rules rather than those who simply make mistakes.

Most ‘data subjects’ are likely to continue as they have – accepting free services enthusiastically, worrying a bit about data exposure, insufficiently aware of how data are being used – but now a bit more privacy-protected and more able, if they wish, to find out what it is that others know about them.

What happens in practice, I’d say, will depend on what happens in practice (this sentence is written as intended). Early test cases may determine a good deal of its impact. If the law looks enforceable to data businesses, they’re more likely to comply more fully. If it doesn’t, not. A scandal over data misuse may swing opinion and change users’ behaviour. There may be contests between jurisdictions. Hard, therefore, to predict, and important to observe.

There are two global issues arising from this that I’ll close with.

Extraterritoriality and the Brussels effect

The first’s concerned with extraterritoriality.

These regulations apply not just to EU businesses but to businesses that deal with EU citizens and others who live in EU countries. Big Tech/Data businesses like those in Silicon Valley have had to change their ways in order to comply with them. (A few US-based businesses have chosen instead to shut out European users, at least for the time being.)

But there are greater global implications. Could the GDPR become a global standard?

The European Union’s such a large market that global businesses have to engage in it. The question for a company like Facebook, say, is therefore whether to have different rules for its users in the European Union and for those elsewhere, or whether it’s simpler to adopt the European standard for all its users. The tendency for EU standards to become global in this way, across diverse economic sectors, is sometimes called the ‘Brussels effect’ (after the city where the EU’s based). It matters in particular here because the privacy standards in the GDPR are significantly higher (and so more restrictive) than those in the United States, where the biggest data businesses are based.

Europe’s one of three jurisdictions large enough to have potential regulatory power over global businesses, at least where its own territories are concerned and potentially more widely (see the EU’s earlier competition actions against Microsoft’s exploitation of its market dominance). There are, I’d say, three models now regarding privacy online: Silicon Valley’s (which emphasises commercial freedom), Europe’s GDPR, and China’s (which emphasises state authority). Will one of these prevail – and, if so, which?

Privacy and innovation

The other global issue concerns the relationship between different regulatory approaches. I’ve written recently about the need for governments and businesses to rethink the relationship between ‘permissionless innovation’ and ‘the precautionary principle’.

Permissionless innovation here allows businesses to push the limits of data exploitation. If they go too far, they might be hauled back by governments and laws, but reversing innovation’s difficult at best. The precautionary principle implies that innovations – in this case, in how data are exploited – should be constrained by prior principles or tests (for environmental impact, for example, or for rights compliance) before they are deployed.

The boundary between innovation and privacy is increasingly contested, and will be more contested still as the Internet of Things and machine learning become widespread.

The GDPR shifts the goalposts here, it seems to me, at least some way. Privacy by design and by default’s essentially precautionary. It seeks to constrain what businesses (and governments) can do with the data that they have – not just to enable users to exercise constraint but to make constraint, rather than data exploitation, the default.

Whether it will achieve this in practice, we’ll have to see. Businesses (and governments) will take advantage of loopholes wherever they are found. Enforcement may prove difficult, with lengthy test cases while questions remain unresolved. New technologies will enable regulations to be bypassed. But anxiety about the relationship between privacy and innovation will not go away and cannot be ignored.

Photo credit: Convert GDPR

David Souter writes a weekly column for APC, looking at different aspects of the information society, development and rights. David’s pieces take a fresh look at many of the issues that concern APC and its members, with the aim of provoking discussion and debate. Issues covered include internet governance and sustainable development, human rights and the environment, policy, practice and the use of ICTs by individuals and communities. More about David Souter.
NOTE: This blog is on its annual Northern summer / Southern winter break, and will return in September, with more thoughts on the Information Society. 