The EU's plans for its digital future

Image by NakNakNak used under Pixabay License (https://pixabay.com/photos/brussels-europe-flag-4056171/)

By APCNews

The European Commission recently presented a set of strategies to shape the bloc’s digital transformation. This policy package offers a roadmap for the EU’s plans to develop a single market for digital services and regulate platforms, explains how the EU is approaching data governance, and outlines its ideas for dedicated artificial intelligence regulation.

Over the next five years, the Commission will focus on three key objectives: technology that works for people; a fair and competitive economy; and an open, democratic and sustainable society. The White Paper on Artificial Intelligence and the European Data Strategy are the first pillars of this new strategy for the so-called digital transformation of the EU.

Below, APC outlines our initial thoughts and positions on some of the areas covered in these EU digital strategies that will undoubtedly set a key precedent for global discussions on these issues.

On connectivity

The Commission’s communication on “Shaping Europe’s Digital Future” defines connectivity as the “building block for digital transformation”, and focuses on the need to invest in infrastructure and to scale interoperability, with 5G and future 6G appearing as key elements for digital growth.

While APC considers connectivity and meaningful internet access to be key priorities, it has to be acknowledged that both industry and governments have focused mainly on profitable 5G installations in competitive urban markets. Unless governments and companies change their strategies to encourage innovative complementary approaches, such as community networks, people in rural areas will continue to have low or non-existent levels of access. Another crucial aspect to consider is the growing trend of investing in “machine-to-machine connectivity” rather than in connecting people.

On cybersecurity

The Commission points out that digital transformation has to start with EU citizens and businesses being secure, and announces plans to develop a cybersecurity strategy for the bloc. The warning to be flagged here is that the Commission communication places insufficient emphasis on the human rights implications involved in developing such a strategy. Past strategies have paid little attention to the human rights dimension of cybersecurity, jeopardising people, in particular human rights defenders, marginalised groups and journalists, who rely on the internet and its integrity and confidentiality to exercise their human rights. If the internet is not secure, their ability to exercise those rights and, in extreme cases, their personal security can be under threat.

A human rights-based approach to cybersecurity policies, meaning a framing that puts people’s security at the centre, is needed, as well as a gender-sensitive approach that considers the differential threats people face in their use of ICTs because of their sex, gender identity and/or sexual orientation. These approaches should be implemented in a systematic manner, comprehensively addressing the technological, social and legal aspects of cybersecurity, but this is by no means a given.

On artificial intelligence

Artificial intelligence (AI) is one of the pillars of this roadmap for the European digital future. The Commission presented a White Paper on Artificial Intelligence that sets the basis for a common EU framework for the deployment of AI, covering safety, liability, fundamental rights and data.

The White Paper proposes a “balanced approach, based on excellence and trust” and addresses the benefits of AI as well as the potential risks associated with lack of transparency, gender-based and other kinds of discrimination, and intrusion into private life, among others. The paper seeks to lay the foundations for a common EU “human-centric” approach to AI aligned with EU law and fundamental rights, based on existing guidelines for trustworthy AI. As stated by Margrethe Vestager, the Commission executive vice-president who leads EU digital policy, the Commission “will be particularly careful with sectors where essential human interests and rights are at stake.”

The 2019 edition of APC's Global Information Society Watch (GISWatch) report focuses on the implications of AI from human rights, development and social justice perspectives with a specific focus on the global South. As Vidushi Marda of ARTICLE 19 stresses in the introduction to the report, the deliberations around AI are "profoundly political", which is why in this edition of GISWatch, we focused on jurisdictions that have been excluded from mainstream conversations around this technology, to contribute to a well-informed, nuanced and truly global conversation.

The EU White Paper states that future regulation will focus on so-called “high-risk” AI systems: those that can interfere with human rights, such as biometric identification and other surveillance technologies. As stated in the draft proposal, these systems will have to be tested and certified before they reach the EU single market, and their use should be “duly justified, proportionate and subject to adequate safeguards,” as well as transparent, traceable and subject to human oversight.

While an early draft of the White Paper mentioned the proposal to impose a moratorium on facial recognition technology, Vestager stated during a press conference in Brussels that “a ban was never really the Commission’s plan.” At the same time, leaked internal EU documents reported on by The Intercept indicated that the bloc could in fact be creating a network of national police facial recognition databases.

It is a fact that data-intensive systems are being deployed around the world in ways that reinforce and exacerbate structural racism and inequality, particularly for people in positions of vulnerability and marginalisation. These systems make questions around consent for the collection, processing and use of data more important than ever, especially given how that data may be used to restrict freedom of association and protest. For this reason, and until human rights safeguards are in place, at APC we call for an immediate moratorium on the use of facial recognition technology in public spaces.

On data

The Commission’s draft digital strategy states that “we cannot talk about artificial intelligence without talking about data.” The EU has developed a data strategy for the coming five years that includes measures that will “keep the EU at the forefront of the data-agile economy, while respecting and promoting the fundamental values that are the foundation of European societies.”

Building on existing EU frameworks on personal data protection, open data, consumer protection and competition rules, among others, the strategy seeks to foster a legislative approach to data that will contribute to “realising its potential in the data economy,” covering data governance, access and reuse (among businesses and governments, and within administrations).

We agree that a focus on data protection is crucial, as privacy is essential to human dignity and enables the exercise of the right to self-determination, in particular for groups such as women or LGBTIQ persons. But since the business model based on the exploitation of massive flows of personal data involves both states and businesses, it is critical to create legal frameworks that address the responsibility of both actors to respect the right to privacy and to provide remedies for victims of privacy violations.

On platform and content governance

Discussions on how to govern platforms and their role in propagating hate speech and other violent extremist content have increased in Europe over recent years. The EU’s communication on shaping the digital future states: “Some platforms have acquired significant scale, which effectively allows them to act as private gatekeepers to markets, customers and information. We must ensure that the systemic role of certain online platforms and the market power they acquire will not put in danger the fairness and openness of our markets.”

Even if platforms are not held legally liable for the content they make available, they do need to take responsibility and be accountable for their own actions in manipulating, ranking, filtering, moderating and taking down content or users’ accounts, in line with the UN Guiding Principles on Business and Human Rights and the Santa Clara Principles on Transparency and Accountability in Content Moderation. APC advocates for a human rights-based approach to guide companies’ content moderation processes, not just in how they respond to takedown requests, but throughout the entirety of their operations. This approach should be guided by, among others, the principles of accountability, equality and non-discrimination, participation and inclusion, transparency, empowerment, sustainability and non-arbitrariness.

A recent APC issue paper on EU responses to online content governance proposes a co-regulatory approach focused on company processes rather than the content itself: a model with mandatory oversight by an independent regulator that would allow scrutiny of the platform design choices that enable the amplification of harmful content.

On the automation of labour

Although the risks that automation and platforms pose to workers’ conditions are covered in the EU's plans for the coming years, with a specific initiative on this expected by 2021, the new realities created by the increasing platformisation and algorithmic management of labour are not something from the future. They are a current reality, sustained by the “hidden ghost work” involved in data cleaning, image labelling, text processing and content moderation, often outsourced and performed by back-end workers across developing economies, as addressed in a thematic report in the 2019 edition of GISWatch. Content moderation in particular needs to be acknowledged as labour, largely feminised, devalued and offshored, that requires specific attention.

On environmental sustainability

According to the Commission, “digital technologies are a critical enabler for the Green Deal, the EU's new growth strategy to become the world's first climate-neutral continent by 2050.”

The same document also states that digital solutions can advance the circular economy, support the decarbonisation of all sectors and reduce the environmental and social footprint of products placed on the EU market. The inclusion of environmental sustainability in the EU's digital plans is welcome, and sets a good example for current and upcoming digital policy discussions worldwide, where this issue has been largely absent. Further research and the mobilisation of networks and social movements around this issue are also crucial to responding to the environmental crisis and advancing environmental sustainability.

What now?

The documents mentioned above set a roadmap for the EU's digital policies for the next five years. Or, as Commission President Ursula von der Leyen described it, they propose European solutions in the digital age. These are presented as strategies, not as concrete proposals, but they will definitely establish a key precedent for other regions and countries to follow.

The Commission will be taking feedback on its proposals around data and artificial intelligence from civil society, academia and the private sector until May 2020. It is imperative that global South perspectives reach the EU as it agrees on its digital governance framework. As Vidushi Marda emphasises in her introduction to the 2019 GISWatch edition on artificial intelligence and human rights, the governance and politics around AI, data and many of the other issues outlined above "suffer from fundamental structural inequalities", and at present jurisdictions from the global South do not form part of the evidence base on which the governance of these technologies is built. This imbalance in the global narratives around these technologies must be addressed: governance models for technology, Marda says, need to be driven in a bottom-up, local-to-global fashion that looks at different contexts with the same granularity in the global South as in the global North.


