Inside the Information Society: Measuring the Information Society

There’s a lot of talk about the need to measure the Information Society. Governments, businesses and UN agencies all want to know who has access to what information and communications. They want to know when and how that access is obtained and used, and what impact it’s having on our lives.

You might think this would be easy. After all, the Information Society’s a digital phenomenon. Data on what we do with phones and PCs, on apps and social media, are gathered by default. They’re used to help us make the most of those devices and services (and to maximise the value of our use to their suppliers). The volume of data that arises from our digital activity is doubling every couple of years.

Four questions

But it isn’t quite as simple as it seems. The Information Society may be built on data, but the data available to measure it are weak and insufficient. Four questions in this post:

  • What do we need to measure?

  • Why do we need to measure it?

  • What’s the problem in doing so?

  • And what can we do to make things better?

What do we need to measure?

First, we want to know what’s happening. ICTs are rapidly pervading our communications cultures. If we’re to maximise the positives and mitigate the negatives from this, we need to know who’s using what, when, how and where. Not just raw numbers but disaggregated numbers – how many women and how many men, old and young, poor and rich, illiterate and literate; by country, district and community.

And, second, we need to know what’s happening as a result. We need data not just on access and use, but on their impact on economies, societies and cultures – on other economic sectors, and on issues like equality, inclusiveness, empowerment and employment. We need trend data that help us to look forward as well as back. And we need data that identify unanticipated outcomes as well as those that we’ve looked forward to or feared.

Why do we need to measure it?

We need to understand what’s happening if we’re to shape the future rather than letting it be shaped for us by technology and markets.

Too much policymaking for the Information Society has been based on faith and guesswork. Evidence that’s accumulated in the last few years has improved the quality of policy, adding subtlety and understanding, but there’s still too much evangelism and too little realism in visions of the online future. The more evidence we have, and the better its quality, the more we’ll understand and the more capable we'll be of making sound decisions (whatever those may be).

Different stakeholders, different interests

Different stakeholders have different aims here. The Internet’s big corporations already know much more than other stakeholders about what’s going on. They have masses of data and the capacity to analyse them with fine granularity. For them, the goal of measurement’s commercial: maximising profit and shareholder value; building market share; ensuring they’re ahead of their competitors in understanding where the market’s moving.

Other stakeholders have different objectives. Governments need data, and the understanding they can bring, in order to integrate policies for the Information Society with other economic and social programmes, to leverage developmental value, to combat cyberthreats, to raise revenue, et cetera. Non-Internet businesses have their own (non-Internet) commercial goals. Civil society organisations also have their own objectives. Better data should (but don’t necessarily) lead to better understanding for all of these.

What’s the problem?

Ironically, though, the data that we have on digital development are quite poor. This hampers policymaking and raises the risk that it will prove misguided. Why so poor? I’ll give four reasons (but could cite more).

First, good data are difficult and costly to acquire. National statistical offices in most countries are under-financed and under-resourced. This is especially so in Least Developed Countries. They often rely on data from third parties (such as businesses), which have vested interests in the numbers they supply (and withhold data that they think have commercial value or might help competitors). Household surveys, which would help to measure ICT access, use and impact, are expensive. Most countries can’t do them often; many don’t do them at all.

Second, even good data rapidly go out of date, especially when things are changing fast as is the case with ICTs. Many numbers that are widely quoted in the literature and by policymakers are either estimates or derived from small surveys with diverse methodologies. Even at the time they are collected, they’re indicative rather than definitive. After two years, they’re as likely to mislead as to inform.

Third, the data that we have are poorly disaggregated. We should be as concerned about equality and inclusiveness as we are about gross numbers. Growth in the latter can mask problems with the former. Rapid growth in Internet access and use in some countries, for example, seems to have been accompanied by growing gender inequality (perhaps because men can more easily afford the necessary devices).

Fourth, from a development perspective, impact matters more than connectivity. Impact’s rarely instant, and it’s much harder to monitor and analyse, not least because it’s unpredictable and often unanticipated. Big data analysis doesn’t necessarily help here, especially if it’s based on social media, because some groups – the young, the rich, the better-educated – are over-represented in big data sets. Much more, and more timely, thought should be going into how to measure impact.

What is to be done?

So, on the one hand, we need more and better data; and, on the other, we should not rely too much on the data that we have. What can we do to improve things?

Be sceptical

First, we should be more sceptical than we have been. It’s better, sure, to have (accurate, reliable, up-to-date) data than not to have them. But data are not perfect and should not be treated as if they were. They are evidence, not proof. We should always question their reliability, the motives of those providing them, and the interpretations they’ve been given. We should look for the nuances within them and the questions that they raise rather than accepting them at face value. We should question their timeliness and consider what they say about trends.

Be more transparent

Second, we need to improve the quality of data. This requires investment in national statistical systems, but we can triangulate as well. We can get a better picture of Internet access or use, for example, by comparing survey data with those from service providers. It’s in everyone’s interests – ISPs and other online businesses as well as national governments and NGOs – for companies to share information rather than hoard it in their narrow business interest. They should voluntarily do so. If they won’t, regulators should consider how to make them.

Disaggregate

Third, the most obvious conclusion's rarely the most interesting or useful. Of course, it’s worthwhile knowing that the number of broadband subscribers in a country of, say, fifty million people has gone from ten to twenty million in the last three years. But it’s much more useful from a policy perspective if we know how that increase is distributed between geographical and social groups, genders and generations. It’s much more useful if we know how often people are making use of access, what services they’re using, for what purposes, and what impact (they think) access and use are having on their lives and livelihoods.
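
To make that concrete, here’s a minimal sketch in Python, using invented figures, of how a headline total can double while the gender gap widens – the aggregate alone would tell a policymaker none of this:

```python
# A hypothetical illustration, not real statistics: a country where
# broadband subscribers double from 10 to 20 million over three years,
# while the gender gap among subscribers widens.

subscribers_m = {
    # year: subscribers in millions, disaggregated by gender
    2014: {"men": 6.0, "women": 4.0},
    2017: {"men": 14.0, "women": 6.0},
}

for year, by_gender in subscribers_m.items():
    total = sum(by_gender.values())
    gap = (by_gender["men"] - by_gender["women"]) / total * 100
    print(f"{year}: total {total:.0f}m, gender gap {gap:.0f}% of subscribers")

# Prints:
#   2014: total 10m, gender gap 20% of subscribers
#   2017: total 20m, gender gap 40% of subscribers
# The headline number doubles while inequality worsens: exactly the
# kind of trend that the aggregate figure masks.
```

The same logic applies to any other axis of disaggregation: region, age, income or literacy.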

Look to qualitative evidence

Finally, we should not rely too much on numbers. Statistics don’t provide the truth, the whole truth and nothing but the truth. Many aspects of the Information Society that we wish to measure can’t be captured by quantitative evidence. Think, for example, of issues around human rights, the quality of employment, or the nature of decision-making. Qualitative evidence is equally important on many issues, and more important in some aspects of policy and governance.

Policymakers need more than numbers; they also need credible reports and credible analysis of what’s happening around them. UNESCO’s Media Development Indicators offer an interesting approach here: a collage of many indicators – some quantitative, more of them qualitative – covering different aspects of the media environment. They require time and effort, but they identify more of the issues that call for policy intervention than mere league tables of statistical achievement.


David Souter writes a weekly column for APC, looking at different aspects of the information society, development and rights. David’s pieces take a fresh look at many of the issues that concern APC and its members, with the aim of provoking discussion and debate. Issues covered include internet governance and sustainable development, human rights and the environment, policy, practice and the use of ICTs by individuals and communities. More about David Souter.