Algorithmic transparency and the right to explanation: Transparency is only the first step

The session on "Algorithmic transparency and the right to explanation" took place on the first day of the Internet Governance Forum (IGF) 2018. Expectations for the workshop ran high, because algorithms and the right to explanation are a hot topic that raises numerous concerns but offers few answers. The room was crowded – some attendees stood through the session because there weren’t enough seats for everyone. Everybody wanted to hear what the experts had to say about algorithms and transparency. The panel was composed almost entirely of women, something positive worth highlighting, because panels of this kind are often made up solely of men – a “manel” – without any consideration for gender balance.

The session started with the question: “Why do we need transparency on algorithms?” A first and quick answer is that the European Union’s General Data Protection Regulation (GDPR) dictates that whenever personal data is subject to automated decision making, people have “the right to obtain human intervention on the part of the controller” – that is, the right to explanation. People have the right to know what happens with their personal data and to understand how companies will use it.

A second reason why we need transparency is that automated decision making is inextricably connected with the context of those decisions, and can reproduce or even exacerbate injustices. Although artificial intelligence (AI) can serve useful purposes, it also raises significant concerns.

“AI is not the solution yet,” Karen Reilly highlighted in the session. “It can address a lot of issues, but unless we have a multidisciplinary project to ask ourselves, what do we actually want to build, what kind of teams do we want to see before we apply this, it's just going to be new technology reinforcing age-old systems of oppression.”

In light of this, transparency is a tool for understanding AI’s implications: how far its reach extends and what consequences it can have in specific contexts. Humans may discriminate against people because of their skin colour, gender, religion or other grounds; if that same bias exists among the developers of AI, or in how companies are run, AI can turn it into an entire system of discrimination.

Reilly noted that even though Howard University produces excellent researchers, developers in Silicon Valley are still overwhelmingly white men, and the same problem arises when it comes to gender discrimination. “Google employees walked out over their handling of harassment and horrible, horrible things including gender-based violence”, she said. She also pointed to Amazon’s AI-based recruiting system, which showed a preference for white men.

To avoid these problems, it is crucial to develop multidisciplinary projects that approach the issue from a human rights perspective and consider possible bias and/or discrimination. In a recent report addressing the implications of artificial intelligence technologies for human rights, the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, David Kaye, stressed the need not only for human rights impact assessments, but also for public consultations on the design and implementation of AI systems, which must include civil society, human rights groups and relevant local communities.

In this regard, as Reilly pointed out, “Discussions with those activists will make you feel uncomfortable. If you want to solve some of these problems you will have to talk to activists who make you uncomfortable.”

In the end, there was no consensus on the panel about the need to develop mechanisms that include other actors in automated decision making, particularly when personal data and human rights are involved. Questions were raised about new ways of assessing algorithms, but the panel did not have a satisfying answer to them.

Nonetheless, the panel concluded that addressing transparency in automated decision making is only a first step. As Lorena Jaume-Palasi stated, “Transparency is not an end by itself. We're at the very beginning of a conversation. We started with transparency, and right now, we're trying to identify what we mean by transparency, to whom and for what purposes.”

Although the panel put interesting points on the table, it seems that the conversation around algorithms, transparency and accountability will continue to be a challenging one.

Photo: Boudewijn Bollmann, used under CC BY-NC-ND 2.0 licence (https://flic.kr/p/26Gckuq)

 
