Can Robotic AI Systems Be Virtuous and Why Does This Matter?

Int J Soc Robot. 2022;14(6):1547-1557. doi: 10.1007/s12369-022-00887-w. Epub 2022 Jun 11.

Abstract

The growing use of social robots in times of isolation refocuses ethical concerns about Human-Robot Interaction and its implications for social, emotional, and moral life. In this article we raise a virtue-ethics-based concern regarding the deployment of social robots relying on deep learning AI and ask whether they may be endowed with ethical virtue, enabling us to speak of "virtuous robotic AI systems". In answering this question, we argue that AI systems cannot genuinely be virtuous but can only behave in a virtuous way. To that end, we start from the philosophical understanding of the nature of virtue in the Aristotelian virtue ethics tradition, which we take to imply the ability to perform (1) the right actions, (2) with the right feelings, and (3) in the right way. We discuss each of the three requirements and conclude that AI is unable to satisfy any of them. Furthermore, we relate our claims to current research in machine ethics, technology ethics, and Human-Robot Interaction, discussing various implications, such as the possibility of developing Autonomous Artificial Moral Agents within a virtue ethics framework.

Keywords: AAMA; HRI; Isolation robots; Loneliness robots; Virtue; Virtue ethics.