Interpreting mental state decoding with deep learning models

Trends Cogn Sci. 2022 Nov;26(11):972-986. doi: 10.1016/j.tics.2022.07.003.

Abstract

In mental state decoding, researchers aim to identify the set of mental states (e.g., experiencing happiness or fear) that can be reliably decoded from the activity patterns of a brain region (or network). Deep learning (DL) models are highly promising for mental state decoding because of their unmatched ability to learn versatile representations of complex data. However, their widespread application in mental state decoding is hindered by their lack of interpretability, by difficulties in applying them to small datasets, and by challenges in ensuring their reproducibility and robustness. We suggest addressing these challenges by leveraging recent advances in explainable artificial intelligence (XAI) and transfer learning, and we provide recommendations for improving the reproducibility and robustness of DL models in mental state decoding.
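
To make the abstract's two remedies concrete, the sketch below pairs them in one toy pipeline. It is a minimal illustration under stated assumptions, not the article's method: the Decoder architecture, voxel count, state count, and random input are hypothetical placeholders; PyTorch is an arbitrary framework choice; and plain gradient saliency stands in for the broader family of XAI attribution methods the review surveys.

    import torch
    import torch.nn as nn

    # Hypothetical decoder mapping a flattened fMRI activity pattern to
    # mental state logits. Names and sizes are illustrative assumptions.
    class Decoder(nn.Module):
        def __init__(self, n_voxels=2000, n_states=4):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_voxels, 128), nn.ReLU())
            self.head = nn.Linear(128, n_states)

        def forward(self, x):
            return self.head(self.encoder(x))

    model = Decoder()
    model.eval()

    # Random tensor standing in for one preprocessed brain activity sample.
    x = torch.randn(1, 2000, requires_grad=True)
    logits = model(x)
    state = logits.argmax(dim=1).item()

    # XAI via gradient-based saliency: backpropagate the decoded state's
    # logit to obtain a per-voxel attribution map.
    logits[0, state].backward()
    saliency = x.grad.abs().squeeze()
    print("most influential voxels:", saliency.topk(5).indices.tolist())

    # Transfer learning sketch: freeze a (hypothetically pretrained) encoder
    # and fine-tune only the classification head on a small target dataset.
    for p in model.encoder.parameters():
        p.requires_grad = False
    optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)

Freezing the encoder and retraining only the head is the simplest form of transfer learning; in practice the randomly initialized encoder would be replaced with weights pretrained on a large neuroimaging corpus before fine-tuning on the small decoding dataset.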

Keywords: deep learning; explainable artificial intelligence; mental state decoding; neuroimaging; reproducibility; robustness; transfer learning.

Publication types

  • Review
  • Research Support, Non-U.S. Gov't
  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Artificial Intelligence*
  • Brain
  • Brain Mapping*
  • Deep Learning*
  • Humans
  • Machine Learning
  • Neuroimaging
  • Reproducibility of Results