Artificial cognition: How experimental psychology can help generate explainable artificial intelligence

Psychon Bull Rev. 2021 Apr;28(2):454-475. doi: 10.3758/s13423-020-01825-5. Epub 2020 Nov 6.

Abstract

Artificial intelligence powered by deep neural networks has reached a level of complexity where it can be difficult or impossible to express how a model makes its decisions. This black-box problem is especially concerning when the model makes decisions with consequences for human well-being. In response, an emerging field called explainable artificial intelligence (XAI) aims to increase the interpretability, fairness, and transparency of machine learning. In this paper, we describe how cognitive psychologists can make contributions to XAI. The human mind is also a black box, and cognitive psychologists have over 150 years of experience modeling it through experimentation. We ought to translate the methods and rigor of cognitive psychology to the study of artificial black boxes in the service of explainability. We provide a review of XAI for psychologists, arguing that current methods possess a blind spot that can be complemented by the experimental cognitive tradition. We also provide a framework for research in XAI, highlight exemplary cases of experimentation within XAI inspired by psychological science, and provide a tutorial on experimenting with machines. We end by noting the advantages of an experimental approach and invite other psychologists to conduct research in this exciting new field.
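To make the core idea concrete, here is a minimal sketch, not taken from the paper, of what "experimenting with machines" in the psychological tradition can look like: an opaque model is treated as a participant, probed with controlled stimuli in which one dimension is manipulated while the other is held constant, and its responses are aggregated to test for a systematic effect. The black_box function, the stimulus dimensions, and the trial counts below are all hypothetical placeholders standing in for whatever trained model and inputs a researcher actually has.

```python
# Illustrative sketch only: a psychophysics-style probe of an opaque model.
# black_box is a hypothetical stand-in for any trained classifier or scorer.
import numpy as np

rng = np.random.default_rng(0)

def black_box(stimulus):
    """Stand-in for an opaque model: returns a scalar 'response' to a 2-D stimulus."""
    # Hypothetical internal rule that the experimenter cannot inspect directly.
    return 1.0 / (1.0 + np.exp(-(2.0 * stimulus[0] - 0.1 * stimulus[1])))

# Experimental design: manipulate one stimulus dimension (the independent
# variable) across levels, hold the other constant, and repeat each level
# over many noisy presentations (trials).
levels = np.linspace(-1.0, 1.0, 5)
mean_responses = []
for level in levels:
    trials = [black_box(np.array([level, 0.0]) + rng.normal(0, 0.05, 2))
              for _ in range(50)]
    mean_responses.append(np.mean(trials))

# Inference: estimate whether the manipulated dimension systematically
# drives the model's response (here, via the slope of a linear fit).
slope = np.polyfit(levels, mean_responses, 1)[0]
print("Mean response per level:", np.round(mean_responses, 3))
print(f"Estimated effect (slope) of manipulated dimension: {slope:.3f}")
```

In this toy setup the manipulated dimension produces a clear positive slope, which an experimenter could then compare against a competing hypothesis (for example, that the second dimension drives the response) by swapping which dimension is varied; the point is the design logic, not the particular model.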

Keywords: Comparative cognition; Hypothesis testing.

Publication types

  • Review

MeSH terms

  • Artificial Intelligence*
  • Cognition*
  • Humans
  • Psychology, Experimental*