Explicability of artificial intelligence in radiology: Is a fifth bioethical principle conceptually necessary?

Bioethics. 2022 Feb;36(2):143-153. doi: 10.1111/bioe.12918. Epub 2021 Jul 12.

Abstract

Recent years have witnessed intensive efforts to specify which requirements ethical artificial intelligence (AI) must meet. General guidelines for ethical AI consider a varying number of principles important. A frequently recurring novel element in these guidelines, which we subsume under the term explicability, aims to reduce the black-box character of machine learning algorithms. The centrality of this element invites reflection on the conceptual relation between explicability and the four bioethical principles. This matters because applying general ethical frameworks to clinical decision-making raises conceptual questions: Is explicability a free-standing principle? Is it already covered by the four well-established bioethical principles? Or is it an independent value that needs to be recognized as such in medical practice? We discuss these questions in a conceptual-ethical analysis that builds upon the findings of an empirical document analysis. Using the medical specialty of radiology as an example, we analyze the positions of radiological associations on the ethical use of medical AI. We address three questions: Do these positions refer to explicability or a similar concept? What reasons are given for its inclusion? And to which ethical concepts is it tied?

Keywords: black box; explainability; machine learning; medical ethics; principlism; transparency.

MeSH terms

  • Artificial Intelligence*
  • Ethical Analysis
  • Humans
  • Morals
  • Radiology*