Metaphors on artificial intelligence help us think through threats to human rights

Image: Illustration by Ellena Ekarahendy (@yellohelle) for APC.

Publisher: APCNews    

Artificial intelligence (AI) is now receiving unprecedented global attention as it finds widespread practical application in multiple spheres of activity. But what are the human rights, social justice and development implications of AI when used in areas such as health, education and social services, or in building “smart cities”? How does algorithmic decision making affect marginalised people and the poor?

The 2019 edition of Global Information Society Watch (GISWatch) provides a perspective from the global South on the application of AI to our everyday lives. It includes 40 country reports from countries as diverse as Benin, Argentina, India, Russia and Ukraine, as well as three regional reports. These are framed by eight thematic reports dealing with topics such as data governance, food sovereignty, AI in the workplace, and so-called “killer robots”.

While pointing to the positive use of AI to enable rights in ways that were not easily possible before, this edition of GISWatch highlights the real threats that we need to pay attention to if we are going to build an AI-embedded future that enables human dignity.

APC collaborated with Indonesian illustrator Ellena Ekarahendy to produce a set of visual representations of some of the most striking metaphors that the GISWatch authors used in their reports while analysing the implications of AI for human rights, social justice and development.

The elephant in the room

The claim that AI, while shedding menial and repetitive jobs, will create a newly skilled and re-employable workforce currently lacks evidence to support it. This is the “elephant in the room”, writes Deirdre Williams in her regional discussion on the Caribbean: “[W]hile there is also insistence that the same new technology will create new jobs, few details are offered and there is no coherent plan to offer appropriate re-training to those who may lose their jobs.” (Country and regional report introduction, by Alan Finlay)

The largest employer in many countries of the Caribbean is the government. While digitisation and AI may lead to less long-term storage of paper and a cleaner work environment, 21st century government tends to create redundancy. The “elephant in the room” at public discussions of the new technology is the threat of unemployment. And while there is also insistence that the same new technology will create new jobs, few details are volunteered and there is no coherent plan to offer appropriate re-training to those who may lose their jobs. (“It's not just about putting an app in the app store,” Caribbean report by Deirdre Williams)

Guinea pigs

Artificial intelligence (AI), algorithms, the “internet of things”, smart cities, facial recognition, biometrics, profiling, big data. When one tries to imagine the future of big cities, it is impossible not to think about these terms. But is the desire to make cities “smarter” jeopardising the privacy of Brazilian citizens? Does this desire turn people into mere guinea pigs for experimentation with new technologies in a laboratory of continental proportions? ("We don’t need no observation: The use and regulation of facial recognition in Brazilian public schools," by Mariana Canto)

Killer robots

Countries including the United States (US), South Korea and Russia are investing in AI technologies for use in lethal autonomous weapons systems (LAWS), also dubbed “killer robots”. LAWS are distinguished from other forms of AI-enabled warfare, such as cyberwars, which are not directly lethal. The concept of autonomous weapons is also not new. The landmine is an early example of an autonomous weapon, a device that is triggered autonomously and kills without active human intervention. But the use of AI brings such weapon systems to an entirely new level, for it allows machines to independently search, target and/or eliminate perceived enemies. ("The future is now: Russia and the movement to end killer robots," by J. Chua)

A key policy problem raised by several authors is the question of legal liability in the event of a “wrong” decision by an algorithm (or, in extreme cases, so-called “killer robots”). If this happens, it is unclear whether, for example, the designer or developer of the AI technology, or the intermediary service provider, or the implementing agent (such as a municipality) should be held liable. One solution proposed is that algorithms should be registered as separate legal entities, much like companies, in this way making liability clearer and actionable. (Country and regional report introduction, by Alan Finlay)

Trojan horse

Is the advent of artificial intelligence (AI) the panacea for many of the ills of the developing world, or is AI a Trojan horse to facilitate invasion, the smallpox blankets of the new colonialism? The stories of the horse and the blankets exist as a part of the human story. The Trojan horse was built by the invading Greeks, and filled with their soldiers. Then the Greeks simply waited for the Trojans to come for the horse and drag it into their besieged city. Victory from inside the city walls was easy. The blankets, infected with smallpox and offered as gifts (as the Trojan horse had been), assisted in the colonisation of the so-called “New World” by removing indigenous inhabitants who objected to the invasion. The fact that we no longer remember our history makes us vulnerable, as what happened before can easily happen again. (“It's not just about putting an app in the app store,” Caribbean report by Deirdre Williams)

Do you feel inspired to keep reading?

Download a full copy of the report in PDF format

Order a printed copy of the book

Download ebook [MOBI format]

Download ebook [EPUB format]


Like these designs by Ellena Ekarahendy and want to print them as stickers for your laptop? Download the high-resolution images here:

Elephant | Guinea Pig | Trojan Horse | Killer Robot
