Georgios Kalaitzidis
Charles University in Prague, Faculty of Humanities

This paper explores the nature of AI systems, specifically Large Language Models (LLMs), as distinct agents, and the human tendency to anthropomorphize them. It examines whether concepts like ‘intentional action’ and ‘intention’ can be applied to AI, drawing on the philosophical, cognitive, and technical considerations that define agency and intentionality in AI’s linguistic outputs.
AI systems exhibit a unique form of agency through their ability to process inputs, make algorithm-based decisions, and produce impactful outputs; this agency is distinct from human agency because it lacks self-awareness and genuine intentions. Operating within the limits of their programming and training data, these systems present a form of agency that is not fully comparable to that of biological entities.
The human tendency to ascribe human-like qualities to AI systems reflects a deep-rooted psychological phenomenon: the complexity and responsiveness of these systems evoke the perception that they possess minds. Understanding this anthropomorphism is critical for assessing the ethical and social implications of integrating AI into society.
Further, unlike humans, whose linguistic outputs derive from conscious thoughts and intentions, AI systems generate text through statistical processes that lack intentionality in the philosophical sense. This raises doubts about whether traditional concepts of intention apply to AI systems at all.
In conclusion, AI systems, especially LLMs, exhibit a form of agency fundamentally different from that of humans, defined by the absence of consciousness and intentional action. The tendency to anthropomorphize AI calls for a distinct understanding of these interactions. Sustained dialogue between technology and philosophy is vital as we advance, ensuring that our philosophical frameworks keep pace with technological developments. This dialogue is crucial for ethical deliberation, for the design and deployment of AI, and for evolving our conceptual understanding of agency and intentionality.

Chair: Triston Hanna
Time: September 11th, 16:20 – 16:50
Location: SR.1007
