How do you measure the extent and impact of something that’s pervasive, increasing in scope, and influential across all aspects of our lives? That’s a crucial challenge for policymakers, and it’s one they’re nowhere near resolving.
It’s a challenge, too, that matters more each day. Many of the assumptions about the digital society that were made in its early days have proved unreliable – unsurprisingly, as the pace of change has been more rapid than predicted. Policymakers are now being asked to make big decisions about how technology will shape society, with insufficient evidence about what has gone before, let alone what might lie ahead.
Building a better knowledge base is crucial if we’re to shape the digital society – to maximise its opportunities and mitigate its threats. Here are some thoughts on how we’ve tried to do that, on what needs to be included in that knowledge base, and on some ways to make it better.
How have we measured things?
There’s a paradox in the knowledge base about the digital society.
It’s built on a technology that measures things. The volume of data gathered on everything we do is growing exponentially (though not equally across all nations or all social groups). Algorithms that build on that volume of data are making more and more decisions about what happens to our lives. And yet there’s real uncertainty amongst policymakers and the public about the impact that digitalisation’s having. There’s a shortage of analysis on the big pictures that emerge from all those data bits and bytes.
Some things are easier to measure than others, and that builds bias into decision-making. It’s easier to measure the supply side than the demand side, access than usage, usage than impact, winners than losers. It’s easier to measure what makes society digital – the spread of new technology and services – than what digitalisation’s doing to society – in terms, for instance, of (in)equality, changing patterns of behaviour, environmental gains and losses, political stability.
And there are powerful vested interests involved. Knowledge is power, it’s often said, and data’s at the root of what we can now call a knowledge economy. Ownership of data’s therefore crucial, and that’s concentrated in corporations rather than in governments, more available for business decision-makers than those concerned with public policy and public services.
What has been measured?
Governments and corporations have different priorities when it comes to understanding what is happening and where we might be headed. My concern here is with public policy, and so with governments, those that seek to influence them and those whose lives will be affected by the decisions that they take.
Governments and international organisations have sought to measure some aspects of the digital society at least since WSIS (the World Summit on the Information Society held in 2003 and 2005). Most of this work has been quantitative.
Much of it has been concerned with the supply of digital resources – with infrastructure, with access, with the quantity of usage. This was true, for instance, of the ITU’s ICT Development Index, which was published annually from 2009 to 2017. Measurements like it have been influential but have been hampered by the variable quality of national data. The ITU and others have struggled to build a more expansive and inclusive means of measurement that encompasses today’s more diverse digital experience.
The measurement of impact has been much more challenging than that of access or mere usage. WSIS set a range of targets in different development areas, though these too focused more on inputs (such as the number of computers in schools, or clinics with internet) than on outcomes (such as better health or learning).
Chapters in the Final WSIS Targets Review, published in 2014, illustrated the deficiencies in the data then available. There’ve been improvements since, and it’s now possible to look at outcomes in greater detail and with much more sophistication, at least in richer countries with well-funded statistical resources.
Good work’s been done in publications by UN and other agencies. But the tendency remains to measure what goes in more than what comes out of digitalisation; to seek out success rather than failure; to identify winners more than losers; to consider the short term rather than the long. As I discussed in a recent blog, the complex relationship between digital and social/economic/political aspects of (in)equality has been insufficiently considered.
Reports by vested interests also often gain far more publicity than peer-reviewed research.
The bigger picture
So: a great deal of data’s being gathered, especially by corporations, though much of it’s not being shared with those concerned with public policy. Analysis is often skewed towards the positive, emphasising achievements (which can be celebrated) and opportunities (which should be taken) but underestimating problems (which should be addressed) and risks (which should be mitigated). What’s needed to address this?
I’ll start with two big picture issues, which I’ll illustrate from projects with which I have been associated.
The first’s to understand the digital environment more comprehensively: to see it as a complex ecosystem in which many different facets interact. This requires much more than number-crunching. It needs to be seen as political economy, not just economy; as sociology, not mere statistics; as complexity, not as a compilation of separate factors that can be looked at individually.
I made an attempt to do this when developing an approach to understanding national internet environments ten years ago for the Internet Society. That needs updating now, not least because the pace of change means we should consider the wider digital environment rather than the internet alone, but it illustrates the kind of complex understanding that I think’s required. Little analysis currently looks at the digital environment or stakeholder communities as comprehensively.
The second’s to bring together quantitative and qualitative evidence in order to build a more diverse base for understanding. UNESCO’s Internet Universality Indicators are focused on a particular range of policy objectives that are consistent with UNESCO’s mandate, but they exemplify an approach to building a collage of indicators from what is available, including qualitative evidence, rather than relying on things that are simplest to quantify.
Within that we need granularity. As I’ve already said, to understand digital development as fully as we should, we need to use qualitative as well as quantitative evidence; to focus on the demand side (what users experience) as well as the supply side (what they are offered by government and business). But we need to do much more than this to build the framework for policy. Here are five ways in which our understanding of the past and present needs to be improved in order to enable us to understand potential futures and do more to shape them.
First, we need to consider impacts at least as much as, preferably more than, we consider inputs. Too many policy decisions have been made, and continue to be made, on the basis of assumptions about what new technologies ‘can’ do; they should be based on knowledge of what those technologies have done and evidence-based assessments of what is likely to happen as a result of future deployment and interaction with the wider environment (in all its aspects).
Second, they need to be societal. Data gathering on usage has tended to focus on how many individuals are connected and (in some cases) how those individuals use connectivity. More needs to be understood about the impact of changes in individual and corporate/collective behaviour on society as a whole – such as what can be inferred regarding long-term shifts in lives and lifestyles (in employment, for example, in urbanisation, housing, transport, social welfare, the appropriateness of health and education services to changing lives, etc.).
Third, they need to be disaggregated. Too many of the datasets used by policymakers are based on gross or average figures. But the experiences of different individuals, and of different social groups, have differed markedly within societies as a result of other factors, such as income, education, gender, race and class. Lack of disaggregation is why the impact of digitalisation on (in)equality has been so poorly understood. Assumptions that it would diminish inequalities of power and resources have been badly misplaced, resulting in mistaken policy decisions.
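The point about averages can be made concrete with some arithmetic. Below is a minimal sketch, using entirely invented figures for a hypothetical country, of how a healthy-looking headline access rate can mask a threefold gap between income groups:

```python
# Hypothetical survey counts: (people with internet access, group size).
# All figures are invented purely for illustration.
access_by_income = {
    "lowest quintile":  (310, 1000),
    "middle quintiles": (2250, 3000),
    "highest quintile": (950, 1000),
}

total_connected = sum(connected for connected, _ in access_by_income.values())
total_people = sum(size for _, size in access_by_income.values())

# The aggregate figure looks healthy...
print(f"National access rate: {total_connected / total_people:.0%}")  # 70%

# ...but disaggregation reveals very different experiences by group.
for group, (connected, size) in access_by_income.items():
    print(f"{group}: {connected / size:.0%}")
```

A policymaker looking only at the 70% headline would miss that fewer than a third of the poorest group is connected – exactly the kind of gap that gross figures conceal.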
Fourth, they need to recognise the differences between countries. Indices too readily become league tables, with countries competing to appear better than their neighbours. This doesn’t help their governments to understand the underlying challenges they face, whether they’re to do with (e.g.) access or with cybercrime. The same levels of internet access and usage can have very different impacts in countries with different political and social structures, levels of inequality and discrimination, legacies of law and regulation and belief. Conclusions drawn in one country don’t necessarily, or even often, fit another; at the least, they need to be interpreted in context.
And fifth, our data gathering and analysis need to look at trends. How things stand today, in terms of access or of usage, of e-commerce or of ‘online harms’, is insufficient to provide a base for policy. Longitudinal evidence, which shows how things are changing, and how the pace of change is changing, is essential for good policy development at the interface between digital and other fields of governance. Too often, it is lacking: a deficiency that needs to be addressed by governments, businesses and academia.
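Why a longitudinal series tells us more than a snapshot can be shown with a simple calculation. This sketch, again with invented yearly figures, uses first differences to show how fast things are changing and second differences to show how the pace of change is itself changing:

```python
# Hypothetical share of households online, by year (invented figures).
years = [2018, 2019, 2020, 2021, 2022]
online_share = [0.55, 0.62, 0.71, 0.76, 0.78]

# First differences: year-on-year change in the level.
growth = [b - a for a, b in zip(online_share, online_share[1:])]

# Second differences: change in the pace of change itself.
acceleration = [b - a for a, b in zip(growth, growth[1:])]

for year, g in zip(years[1:], growth):
    print(f"{year}: {g:+.2f}")

# A 2022 snapshot of 0.78 hides that growth has slowed sharply.
print("change in pace per year:", [round(a, 2) for a in acceleration])
```

In this invented series the level is still rising, but the second differences turn negative from 2021: growth is decelerating, which matters for any policy premised on continued expansion. A single-year figure cannot reveal that.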
Data gathering and analysis
Data gathering and analysis are, therefore, both important. It’s not sufficient just to get good numbers; we also need to understand the limits of what those numbers tell us. Qualitative as well as quantitative analysis is required, one that recognises the limits of our data sets and reaches across the boundaries between digital and other contexts. Many countries still have far too limited resources for these tasks.
The range of evidence that’s needed to understand the digital environment’s now vast. Some governments and regulators that have available resources put considerable effort into this. The UK’s regulator Ofcom, for instance, publishes an annual report on the UK as an Online Nation, focusing on a few aspects of what’s happening. It conducts other substantial research, but even so the range of what it knows is limited, and almost certainly much less than what is known about the UK market by Facebook/Meta, Alphabet/Google, Amazon and co. More data sharing by corporations, mandated in the public interest rather than vouchsafed by vested interest, would be beneficial.
In 2025, the United Nations is due to review ‘progress’ towards an ‘information society’ in the twenty years since WSIS. That ought to shape global policies towards the future, aligned in some way (it’s presumed) with the Secretary-General’s Roadmap for Digital Cooperation. If the UN or others are to understand that properly, we need much more effective data-gathering and analysis. And that needs more resources, especially in those countries that lack experienced statistical institutions or the funding they require.
So here’s a paradox for the digital society. It has enabled exponential growth in data volumes and the means by which those data can be analysed at scale. This is the basis for AI and many hopes for better futures (as well as fears for worse). Yet policymakers don’t have the breadth of evidence or depth of understanding that they need to make the most of opportunities and mitigate the worst of fears. That paradox needs looking at.