Unraveling motor imagery brain patterns using explainable artificial intelligence based on Shapley values

Comput Methods Programs Biomed. 2024 Apr:246:108048. doi: 10.1016/j.cmpb.2024.108048. Epub 2024 Jan 30.

Abstract

Background and objective: Motor imagery (MI) based brain-computer interfaces (BCIs) are widely used in rehabilitation due to the close relationship between MI and motor execution (ME). However, the underlying brain mechanisms of MI remain poorly understood. Most MI-BCIs use the sensorimotor rhythms elicited in the primary motor cortex (M1) and somatosensory cortex (S1), which consist of an event-related desynchronization followed by an event-related synchronization. This has resulted in systems that record signals only around M1 and S1. However, MI could involve a more complex network including sensory, association, and motor areas. In this study, we hypothesize that the superior accuracies achieved by new deep learning (DL) models applied to MI decoding rely on their ability to exploit a broader MI-related activation of the brain. In parallel to the success of DL, the field of explainable artificial intelligence (XAI) has developed continuously to provide explanations for the success of DL networks. The goal of this study is to use XAI in combination with DL to extract information about MI brain activation patterns from non-invasive electroencephalography (EEG) signals.

Methods: We applied an adaptation of Shapley additive explanations (SHAP) to EEGSym, a state-of-the-art DL network with exceptional transfer learning capabilities for inter-subject MI classification. We obtained the SHAP values from two public databases comprising 171 users generating left- and right-hand MI instances with and without real-time feedback. A sketch of how such attributions can be obtained is shown below.
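As a hedged illustration of this kind of analysis (not the authors' exact pipeline), the sketch below shows how per-sample SHAP attributions could be computed for a trained Keras-based EEG classifier using the shap library's GradientExplainer. The checkpoint path, epoch files, and input shape are illustrative assumptions, not values taken from the paper.

    # Minimal sketch: SHAP attributions for a trained EEG deep-learning classifier.
    # Assumptions: a Keras checkpoint "eegsym_pretrained.h5" and EEG epochs saved as
    # NumPy arrays shaped (n_trials, n_samples, n_channels, 1). All paths are hypothetical.
    import numpy as np
    import shap
    from tensorflow import keras

    model = keras.models.load_model("eegsym_pretrained.h5")   # hypothetical checkpoint

    # Background epochs define the SHAP reference distribution;
    # test epochs are the left/right-hand MI trials to explain.
    X_background = np.load("background_epochs.npy")           # hypothetical file
    X_test = np.load("test_epochs.npy")                       # hypothetical file

    # GradientExplainer implements expected gradients, a SHAP variant for deep networks.
    explainer = shap.GradientExplainer(model, X_background)
    shap_values = explainer.shap_values(X_test)                # one array per output class

    # Each array mirrors the input shape, so attributions can be inspected
    # per channel and per time sample.
    print(shap_values[0].shape)

Because the attributions preserve the channel and time dimensions of the input, they can be averaged over trials to reveal which electrodes and latencies drive the classification.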

Results: We found that EEGSym based most of its predictions on the signal of the frontal electrodes, i.e., F7 and F8, and on the first 1500 ms of the analyzed imagination period. We also found that MI involves a broad network that includes not only M1 and S1 but also the prefrontal cortex (PFC) and the posterior parietal cortex (PPC). We further applied this knowledge to select an 8-electrode configuration that reached inter-subject accuracies of 86.5% ± 10.6% on the Physionet dataset and 88.7% ± 7.0% on the Carnegie Mellon University's dataset.
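The electrode-selection step can be illustrated with the following hedged sketch, which ranks channels by their mean absolute SHAP attribution and keeps the eight most informative ones. The montage, array shapes, and random placeholder values are assumptions for illustration; the paper's exact 8-electrode configuration is not reproduced here.

    # Minimal sketch: selecting the 8 most informative electrodes from SHAP attributions.
    import numpy as np

    # Placeholder attributions standing in for the output of a SHAP explainer:
    # shape (n_classes, n_trials, n_samples, n_channels); random values for illustration.
    rng = np.random.default_rng(0)
    n_classes, n_trials, n_samples, n_channels = 2, 100, 750, 16
    shap_values = rng.normal(size=(n_classes, n_trials, n_samples, n_channels))

    channel_names = ["Fp1", "Fp2", "F7", "F3", "Fz", "F4", "F8", "T7",
                     "C3", "Cz", "C4", "T8", "P3", "Pz", "P4", "Oz"]  # example 10-10 montage

    # Average the absolute attribution over classes, trials, and time samples
    # to obtain one importance score per channel.
    channel_importance = np.abs(shap_values).mean(axis=(0, 1, 2))

    # Keep the 8 channels with the highest importance.
    top8_idx = np.argsort(channel_importance)[::-1][:8]
    selected = [channel_names[i] for i in sorted(top8_idx)]
    print("Selected 8-electrode configuration:", selected)

In practice the placeholder array would be replaced by the attributions obtained from the explainer, so the ranking reflects the channels the network actually relies on.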

Conclusion: Our results demonstrate the potential of combining DL and SHAP-based XAI to unravel the brain network involved in producing MI. Furthermore, SHAP values can be used to optimize the requirements of out-of-laboratory BCI applications involving real users.

Keywords: Brain-computer interface (BCI); Deep learning (DL); Explainable artificial intelligence (XAI); Motor imagery (MI); Sensorimotor rhythms (SMR); Shapley additive explanations (SHAP).

MeSH terms

  • Algorithms
  • Artificial Intelligence*
  • Brain / physiology
  • Brain-Computer Interfaces*
  • Electroencephalography / methods
  • Humans
  • Imagination / physiology
  • Movement / physiology