Ethics of Friendship with AIs

Tugba Yoldas

University of Alberta

In this paper, I discuss friendship from the perspective of virtue ethics and claim that friendship with AI companions might be harmful to us because, first, they do not actually meet our moral needs where friendship is concerned, and second, they threaten to undermine the virtue of friendship by introducing a morally defective mode of friendship and by transforming important societal norms about interpersonal relationships. I argue that designing ethically aligned AI companions gains significant importance, especially in areas where these companions come into contact with vulnerable groups in society such as children, the elderly, and people living with disabilities. In particular, I explore virtue-centered value-sensitive design in building AI companions for friendship. In conclusion, I suggest that AI companions could prove helpful, for example, as models to help children practice ethical and intellectual virtues (e.g., Constantinescu et al., 2022), as mental health aids (e.g., Graham et al., 2019), and as ethical mediators (e.g., Anderson, 2011; Moor, 1995; on artificial moral advisors, e.g., Giubilini and Savulescu, 2018) to help us make better moral decisions and navigate ethical dilemmas. But this is a possibility only if their design respects certain ethical principles such as trustworthiness, privacy, safety, and reliability.
This paper focuses on the Aristotelian framework of virtue ethics in Book VIII of the Nicomachean Ethics to explore some of the potential moral implications of human-AI friendships. First, I discuss why friendship with present-day AI companions is not something that we should want. According to virtue ethics, friendship first and foremost requires "reciprocated goodwill" (Aristotle, 1881/1999, 1155b34) between two people who share similar values, empathy, and compassion for each other. I contend that this is something that cannot be achieved in our relationships with existing AI companions. I argue that there cannot be genuine mutual care in current AI-human friendships; rather, this kind of friendship is only coincidental. At least for now, human loneliness cannot be remedied by AI companions, due to the inauthenticity of the understanding and emotion that these technologies could provide us.
Recent AI companions can exhibit a range of emotional reactions (Milliez, 2018), and they are designed to look like humans. The more anthropomorphic they become, the more affective and social capacities people expect from these robots. Such additions to AI technologies raise various ethical questions, one being whether humans can make friends with AI companions in the near future. However, AI-human relationships would likely transgress ethical limits on the manipulation of human psychology if these technologies were designed with the intention of deceiving people (e.g., Turkle, 2011), especially vulnerable social groups, into thinking that their understanding, care, and emotions are genuine. In addition, friendship with AI companions is likely to degrade our conception of friendship and lead to undesirable consequences, such as an inability to deal with complex emotional human relationships, a preference for individual independence at the cost of social interdependence, and more. Therefore, we should design these technologies as ethically aligned companions rather than embracing them as our friends.

Chair: Aaron Wirt

Time: September 7th, 10:00-10:30

Location: SR 1.003
