This article was republished from EngageMedia.

This article is the last in a series on the ethical principles and guidelines of artificial intelligence (AI), as well as their shortfalls and the search for alternative frameworks.

In Parts One and Two of this series on the ethics of artificial intelligence (AI), we dismantled the assumption that we can rely primarily on ethical guidelines for industry self-regulation, based on two perspectives: the substance of the ethical documents and the difficulties in putting principles into practice. The content of the ethical documents was found to be narrow and limited in scope, while the implementation of beautifully crafted principles and guidelines proved to be difficult without the teeth of legal regulation.

Is that then the end for ethics in AI? That may be too hasty a conclusion. Luciano Floridi [1] points out that self-regulation cannot replace legal regulation, but it can still be a valuable tool for AI governance in situations where legislation is unavailable or in need of an ethical interpretation. There are also situations where we need to make a judgment call on whether it is better to do (or not do) something even when it is not illegal, such as improving labour conditions for workers in the gig economy before it is legally required.

Therefore, in this last part of a critical view of AI ethics, we will explore some studies and thought experiments on how ethical frameworks from other cultures and traditions can be brought in to scrutinise AI design, deployment, and use. The main point of this post is to explore existing alternative ways of determining right and wrong, extending the discussion beyond what is currently available and used in the Western tech industry, focusing on the underlying ways of thinking, and providing some examples of how they can be applied.

Challenging the universality of current AI ethical thinking

Scholars have drawn from diverse philosophies such as Confucianism (East Asian), ubuntu (African), or indigenous epistemologies to question the universality of the prevalent model of AI ethics and to propose alternatives they consider better. In this blog post, we do not have enough space to do justice to the individual philosophies, and can only pick out some points here and there to provide some ideas. However, just acknowledging that alternatives exist provides some level of empowerment – the choice is not between something poor and nothing at all – and we can step outside the box and look at other offerings, perhaps even from our own cultural knowledges and wisdoms in the context of culturally diverse Southeast Asia.

There are certain commonalities among these alternative philosophies that set them apart from current AI ethics as described in Part One, which draws mainly from a Western tradition of ethics. Three of these commonalities are as follows:

  1. A different conception of “personhood”, or what it means to be a person or a human

  2. The importance of context, where ethical judgments are situated in their time and place, with nuances that may change the judgment

  3. The importance of the relational aspect, between human and human, human and other beings, and human and AI

In the following sections, we will explore these points and the alternatives they suggest for doing AI ethics better.

How to be good, how to be human

In Thilo Hagendorff's [2] critique of AI guidelines, he points out that there are different strands of ethical theory, and that "deontology", which emphasises codes of conduct laying out rules and duties (precisely our lists of ethical principles and guidelines), may not be the best route to take for AI ethics. Instead, he points to another strand, "virtue ethics", which focuses on the moral character of the individual (developer, tech company, user, and more) rather than on the technology itself. Virtue ethics goes beyond ticking boxes, and can be seen "as a project of advancing personalities, changing attitudes, strengthening responsibilities and gaining courage to refrain from certain actions which are deemed unethical" (pg. 112).

In other words, what would a good person do? And, further, what does it mean to be a good person or, indeed, a good human? Sabelo Mhlambi [3] states that the traditional Western view of personhood is based on rationality, the belief that "truth could be rationally deduced through formal rules of logic" (pg. 1). This philosophy is inherently individualistic (humanness as the individual's ability to arrive at the truth by logical deduction), and has motivated modern computing to build a machine that would match or surpass humans in reasoning or rational thinking. In terms of ethical behaviour, Mhlambi goes on to argue that the pursuit of rationality in this interpretation has justified a host of dehumanising actions, such as the colonisation or racial subjugation of communities deemed not rational enough and hence not human enough. Endless economic growth and the accumulation of capital also came to be considered part and parcel of what is rational, leading to and justifying centuries of widening inequality that has now been carried into the digital era in the form of data colonisation and surveillance capitalism.

Watch this TEDx Talk by Getrude Matshe to learn more about the ubuntu philosophy.

A person is a person through other persons

In contrast, ubuntu, which underpins much of African philosophy, defines a person from the point of view of social relationships, as "fundamentally relational", where "a person is a person through other persons" (Mhlambi, 2020, pg. 3) [3]. From this departure point, ethics is considered from the point of view of one's relationality with other persons, one's non-human counterparts, and the environment in general. One is recognised as a person or human only when one meets the responsibility of being humane to others, or when one improves the quality of the interconnected relationships one is part of. Social progress is seen from the perspective of social harmony, which is in turn seen in terms of human dignity.

How would this be applied in the context of AI? Mhlambi critiques automated decision-making systems (ADMS) through the lens of ubuntu, arguing that they can be flawed in five ways that violate the ethics of ubuntu:

  1. When they exclude marginalised communities in their design

  2. When they exacerbate current social and racial biases

  3. When they fail to recognise the interconnectedness of society

  4. When they commodify our digital selves, and

  5. When they centralise data and resources in the hands of a few and enable them to inflict harm on the rest of society.

As we have explored in Parts One and Two, some of these aspects have been completely overlooked by the AI ethical principles we have in place today, which hold that a system can be deemed ethical if it is fair, accountable, and transparent (among other principles that focus on narrow fixes), without taking into account the power dynamics and social relationships within the system.

Ethics as contextual, and a constant process of negotiation

Confucianism is another tradition that considers virtue in a relational way. Widely influential in East Asia, this philosophy originated from the teachings of the Chinese philosopher Confucius (551-479 BCE). Similar to ubuntu, Confucianism holds that humans are fundamentally interdependent, and that one can only mature as a human in relation to wulun, the five types of social relationships one is part of: parent-child, sibling, husband-wife, ruler-minister, and friendship. People are not considered proper humans if they do not fulfil their duties and obligations within their social roles. At the same time, they have to consider dao, or the right way to do things – the dao of heaven, which is the principle that organises and governs the universe and/or the material world, and the dao of the human, which states that humans should live by acquiring virtues and cultivating morals.

The complexity of the relationships and the ambiguity of the considerations in Confucianism mirror life and its conflicting priorities and obligations. What if, for instance, what is right conflicts with what is good? Or what if self-interest conflicts with others' interests? Indeed, there are no easy answers to these questions, but I draw upon Pak-Hang Wong's [4] exploration of Confucianism and the ethics of technology for some ideas on how to address moral dilemmas. Primarily, we have to acknowledge that ethical behaviour is never a simple matter of right or wrong, but a constant deliberation about balance and harmony between different factors. While Confucian ethics does prescribe proper behaviour (e.g. on dao, and on social duties), it puts a lot of emphasis on "practising personhood", or "appropriately relating to and interacting with the others in various concrete situations, enabling a person to cultivate his or her moral sensitivity to the others and to the morally significant factors in the situation, which then allows him or her to comprehend relationships and situations more accurately and thus, to respond with propriety more effortlessly" (Lai, 2006, quoted in Wong, 2012, pg. 78) [4].

The virtue ethics of Confucianism emphasises the moral character of the individual, as well as the learning process of growing into a moral human who is able to interpret a situation based on its contextual factors and to decide on the best course of action given the circumstances. When technology joins the fray and changes societal equations, the Confucian does not seek a final answer to "solve ethics once and for all", but focuses on balancing benefits and tradeoffs as a continuous process, through the lens of facilitating social roles and relationships.

Kinship or co-existence with AI

So far, the ideas we have discussed have focused mainly on AI and its societal impacts, but what about how we relate to AI? In a study on perspectives and approaches in AI ethics from an East Asian point of view, Danit Gal [5] suggests a tool-partner spectrum of how people in East Asia perceive AI, ranging from functional instruments on one end to "friends, companions, romantic love interests, and fellow spiritual beings" (pg. 2) on the other. Gal argues that the West mainly views AI and robotics as tools, and while official government and corporate policies in China, Japan, and South Korea suggest the same direction, academic thought, local practices, and popular culture in these countries diverge towards the partner perspective.

Why is the human-AI relationship important? Yi Zeng, a professor at the Chinese Academy of Sciences Institute of Automation, argues that the safest approach to developing AI and robots is to give them a sense of self (consciousness) so that they would be able to empathise with human beings, with a reciprocal and respectful relationship yielding a beneficial outcome for both humans and AI (cited in Gal, 2019) [5]. The Harmonious Artificial Intelligence Principles (HAIP) led by Zeng therefore include not only a section on AI respecting human rights, but also a section on how humans should treat AI, "including future conscious intelligent living becomings". Examples of the latter include empathy ("What human do not want AI to do to human, human should not do unto AI") and privacy for AI. There is also a section on shared principles, mentioning elements such as collaboration, coordination, mutual trust, and evolvability.

The same logic of a harmonious relationship between AI and humans appears elsewhere, in discussions among indigenous scholars who argue that the indigenous epistemologies and cultural traditions of Hawai'i, Cree, and Lakota can provide conceptual frameworks that "conceive of our computational creations as kin and acknowledge our responsibility to find a place for them in our circle of relationships" (Lewis et al., 2018, pg. 4) [6]. They critique the Western "rationalist, neoliberal, and Christianity-infused assumptions" of AI, which justify the treatment of "the human-like" (referring not only to AI but also to indigenous communities, which have historically been considered lesser humans by Western scientists and preachers) as slaves, whereas indigenous beliefs respect the animate and the inanimate and their interconnectivities in a larger scheme of reciprocal and mutually beneficial relationships.

In conclusion

There is a plethora of alternative thinking beyond the current paradigm if we care to look. It is my hope that this article has provided some windows through which to view AI ethics from various cultural landscapes, to broaden the current discussion on how to make better and safer AI, and to take a long-term view for when AI develops from performing narrow functions towards a general intelligence that emulates consciousness and independent thought.

Here I recall the "Good Way" of doing things in the Native American Lakota tradition – it considers implications up to seven generations ahead. Indigenous scholars of the Lakota tribe are already exploring the use of indigenous protocols of ethical decision-making for ethical AI (Kite, 2020) [7].

How can a similar tradition be made applicable in the contexts of Southeast Asia and the Asia-Pacific, where there also exist diverse indigenous cultures? Any answer to this question is beyond the scope of this series, but if anything, the existence of alternative approaches to AI ethics means that we can consider and define realities that make more sense to us and our respective contexts – realities that are relational, inclusive, diverse, sustainable, and respectful to all parties involved.

For more information on AI in the context of Southeast Asia, check out this video, or go to Coconet.social/AI.

References
  1. Floridi, L. (2019). Translating Principles into Practices of Digital Ethics: Five Risks of Being Unethical. Philosophy & Technology, 32(2), 185–193. https://doi.org/10.1007/s13347-019-00354-x

  2. Hagendorff, T. (2020). The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines, 30(1), 99–120. https://doi.org/10.1007/s11023-020-09517-8

  3. Mhlambi, S. (2020). From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance (No. 2020–009; Carr Center Discussion Paper Series, p. 31). Harvard Kennedy School Carr Center for Human Rights Policy. https://carrcenter.hks.harvard.edu/files/cchr/files/ccdp_2020-009_sabel…

  4. Wong, P.-H. (2012). Dao, Harmony and Personhood: Towards a Confucian Ethics of Technology. Philosophy & Technology, 25(1), 67–86. https://doi.org/10.1007/s13347-011-0021-z

  5. Gal, D. (2019). Perspectives and Approaches in AI Ethics: East Asia (SSRN Scholarly Paper ID 3400816). Social Science Research Network. https://papers.ssrn.com/abstract=3400816

  6. Lewis, J. E., Arista, N., Pechawis, A., & Kite, S. (2018). Making Kin with the Machines. Journal of Design and Science. https://doi.org/10.21428/bfafd97b

  7. Kite, S. (2020). How to build anything ethically. In J. E. Lewis (Ed.), Indigenous Protocol and Artificial Intelligence Position Paper (pp. 75–84). The Initiative for Indigenous Futures and the Canadian Institute for Advanced Research (CIFAR). https://spectrum.library.concordia.ca/986506/7/Indigenous_Protocol_and_…