Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability

Sci Eng Ethics. 2020 Aug;26(4):2051-2068. doi: 10.1007/s11948-019-00146-8. Epub 2019 Oct 24.

Abstract

This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence (AI) technologies. It is assumed that only humans can be responsible agents; yet this assumption alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Alongside the well-known problem of many hands, the issue of "many things" is identified, and the temporal dimension of the control condition is emphasized. Special attention is given to the epistemic condition, which raises the issues of transparency and explainability. In contrast to standard discussions, however, it is then argued that this knowledge problem regarding agents of responsibility is linked to the other side of the responsibility relation: the addressees or "patients" of responsibility, who may demand reasons for actions and decisions made using AI. Inspired by a relational approach, responsibility as answerability thus offers an important additional, if not primary, justification for explainability based not on agency but on patiency.

Keywords: Answerability; Artificial intelligence (AI); Explainability; Moral agency; Moral patiency; Problem of many hands; Responsibility; Responsibility attribution; Responsibility conditions; Transparency.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Artificial Intelligence*
  • Humans
  • Knowledge*