Explainable Artificial Intelligence Model for Stroke Prediction Using EEG Signal

Sensors (Basel). 2022 Dec 15;22(24):9859. doi: 10.3390/s22249859.

Abstract

State-of-the-art healthcare technologies are incorporating advanced Artificial Intelligence (AI) models, allowing for rapid and easy disease diagnosis. However, most AI models are considered "black boxes," because there is no explanation for the decisions made by these models, and users may find their results challenging to comprehend and interpret. Explainable AI (XAI) can explain machine learning (ML) outputs and the contribution of features in disease prediction models. Electroencephalography (EEG) is a potential predictive tool for understanding cortical impairment caused by an ischemic stroke and can be utilized for acute stroke prediction, neurologic prognosis, and post-stroke treatment. This study aims to utilize ML models to classify the ischemic stroke group and the healthy control group for acute stroke prediction in active states. Moreover, XAI tools (Eli5 and LIME) were utilized to explain the behavior of the model and determine the significant features that contribute to stroke prediction models. In this work, we studied 48 patients admitted to a hospital with acute ischemic stroke and 75 healthy adults who had no history of other identified neurological illnesses. EEG was obtained within three months following the onset of ischemic stroke symptoms using frontal, central, temporal, and occipital cortical electrodes (Fz, C1, T7, Oz). EEG data were collected in an active state (walking, working, and reading tasks). In the results of the ML approach, the Adaptive Gradient Boosting models showed around 80% accuracy for the classification of the control group and the stroke group. Eli5 and LIME were utilized to explain the behavior of the stroke prediction model and interpret the model locally around the prediction. The Eli5 and LIME interpretable models emphasized the spectral delta and theta features as local contributors to stroke prediction. From the findings of this explainable AI research, it is expected that the stroke-prediction XAI model will help with post-stroke treatment and recovery, as well as help healthcare professionals make their diagnostic decisions more explainable.
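The pipeline described above (gradient-boosting classification of EEG band-power features followed by feature attribution) can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the synthetic band-power features, channel/band names, and sample sizes are assumptions, and scikit-learn's `GradientBoostingClassifier` with permutation importance is used as a stand-in for the paper's Adaptive Gradient Boosting model and Eli5 feature weighting.

```python
# Hypothetical sketch of the paper's pipeline on synthetic data:
# classify stroke vs. control from EEG spectral band-power features
# and rank which features drive the prediction.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
channels = ["Fz", "C1", "T7", "Oz"]          # electrodes used in the study
bands = ["delta", "theta", "alpha", "beta"]  # spectral bands
features = [f"{ch}_{band}" for ch in channels for band in bands]

# Synthetic dataset: label 1 = stroke, 0 = control (sizes are illustrative)
n = 400
X = rng.normal(size=(n, len(features)))
y = rng.integers(0, 2, size=n)
# Make delta/theta power informative, mimicking the reported finding
for i, name in enumerate(features):
    if name.endswith(("delta", "theta")):
        X[:, i] += 1.5 * y

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)

# Global feature ranking (stand-in for Eli5's explain_weights)
imp = permutation_importance(clf, X_te, y_te, n_repeats=5, random_state=0)
ranked = sorted(zip(features, imp.importances_mean), key=lambda t: -t[1])

print(f"test accuracy: {acc:.2f}")
for name, score in ranked[:4]:
    print(f"{name}: {score:.3f}")
```

For local, per-patient explanations as in the paper, `lime.lime_tabular.LimeTabularExplainer` could be applied to `clf.predict_proba` on a single test sample in the same way.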

Keywords: Eli5; LIME; electroencephalography; explainable AI; machine-learning; stroke.

MeSH terms

  • Adult
  • Artificial Intelligence
  • Electroencephalography
  • Humans
  • Ischemic Stroke*
  • Stroke* / diagnosis


Grants and funding

This work was supported by a National Research Council of Science and Technology (NST) Grant through the Korean Government's former Ministry of Science, ICT and Future Planning (MSIP), succeeded by the Ministry of Science and ICT, under Grant CRC-15-05-ETRI. No funding was received for the APC.