Benchmarking explanation methods for mental state decoding with deep learning models

Neuroimage. 2023 Jun;273:120109. doi: 10.1016/j.neuroimage.2023.120109. Epub 2023 Apr 13.

Abstract

Deep learning (DL) models find increasing application in mental state decoding, where researchers seek to understand the mapping between mental states (e.g., experiencing anger or joy) and brain activity by identifying those spatial and temporal features of brain activity that allow these states to be accurately identified (i.e., decoded). Once a DL model has been trained to accurately decode a set of mental states, neuroimaging researchers often use methods from explainable artificial intelligence research to understand the model's learned mappings between mental states and brain activity. Here, we benchmark prominent explanation methods in a mental state decoding analysis of multiple functional Magnetic Resonance Imaging (fMRI) datasets. Our findings demonstrate a trade-off between two key characteristics of an explanation in mental state decoding: its faithfulness, i.e., how well it captures the model's decision process, and its alignment with other empirical evidence on the mapping between brain activity and the decoded mental state. Explanation methods with high faithfulness generally provide explanations that align less well with other empirical evidence than the explanations of less faithful methods. Based on our findings, we provide guidance for neuroimaging researchers on how to choose an explanation method to gain insight into the mental state decoding decisions of DL models.
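
As an illustrative sketch only (not drawn from the article), the kind of attribution analysis described above can be run with an off-the-shelf library such as Captum on a trained PyTorch decoder; the model, input size, and class count below are hypothetical stand-ins.

```python
# Minimal sketch: attributing a decoding decision to input voxels with
# integrated gradients (assumed stack: PyTorch + Captum; not the article's code).
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

n_voxels, n_states = 2000, 4           # hypothetical flattened-volume size and number of mental states

decoder = nn.Sequential(               # stand-in for a trained DL decoding model
    nn.Linear(n_voxels, 128),
    nn.ReLU(),
    nn.Linear(128, n_states),
)
decoder.eval()

fmri_sample = torch.randn(1, n_voxels)        # one (flattened) fMRI volume
baseline = torch.zeros_like(fmri_sample)      # reference input for the attribution

ig = IntegratedGradients(decoder)
predicted_state = decoder(fmri_sample).argmax(dim=1).item()
attributions = ig.attribute(fmri_sample, baselines=baseline, target=predicted_state)

# 'attributions' holds one relevance value per voxel; how faithfully such maps
# reflect the model's decision process, and how well they align with other
# empirical evidence, are the two properties benchmarked in this study.
print(attributions.shape)  # torch.Size([1, 2000])
```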

Keywords: Benchmark; Deep learning; Explainable AI; Mental state decoding; Neuroimaging.

MeSH terms

  • Artificial Intelligence
  • Benchmarking
  • Brain Mapping / methods
  • Brain* / diagnostic imaging
  • Deep Learning*
  • Humans
  • Magnetic Resonance Imaging / methods