The anthropomorphic hype and its Bard: Dangers and opportunities of anthropomorphism in AI rhetoric

Anni Bukowski

Leibniz University Hannover

In this text, I examine the implications of anthropomorphic rhetoric regarding AI. I provide examples of anthropomorphism in AI rhetoric and argue that anthropomorphizing AI in public discourse is neither fundamentally negative nor intrinsically positive: it is inappropriate in some cases and constructive in others. I introduce current critical perspectives on the matter (Hasan 2023; Watson 2019) and claim that a more nuanced view is needed to recognize both dangerous and beneficial uses of anthropomorphism in AI discourse. First, I analyse cases in which anthropomorphic rhetoric is inappropriate or dangerous, e.g. when it creates misleading narratives or hinders our ability to respond to pressing ethical challenges. I present Watson’s (2019) view, according to which humanizing algorithms can result in a loss of autonomy and agency in our decisions. Then, I consider some arguments in favour of anthropomorphism. In particular, I analyse anthropomorphism through the lens of Daniel Dennett’s intentional stance, considering the argument that ascribing intentions, motivations and other human characteristics to non-human systems may help us predict those systems’ behaviour. I also suggest that in some circumstances it is impossible to describe the behaviour of AI systems without using terminology that refers to human characteristics or abilities. I conclude that, since anthropomorphic rhetoric might lead to incorrect perceptions or expectations regarding AI systems, we should anthropomorphize only when we have compelling reasons to do so. My analysis could serve as a starting point for a more systematic investigation of AI rhetoric and its implications: within this framework, my thesis is significant because it helps uncover problematic ways in which AI is being anthropomorphized and distinguish them from productive forms of anthropomorphic rhetoric.

Chair: Lucas Timmerman

Time: 03 September, 14:40 – 15:10

Location: SR 1.005

