Machine learning-empowered sleep staging classification using multi-modality signals

BMC Med Inform Decis Mak. 2024 May 6;24(1):119. doi: 10.1186/s12911-024-02522-2.

Abstract

The goal of this study is to enhance the performance of an automated sleep staging system by leveraging the diverse signals captured in multi-modal polysomnography (PSG) recordings. Three modalities of PSG signals, namely electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG), were considered to obtain the optimal fusion of PSG signals, from which 63 features were extracted. These include frequency-based, time-based, statistical-based, entropy-based, and non-linear-based features. We adopted the ReliefF (ReF) feature selection algorithm to identify the most suitable features for each individual signal and for the fused PSG signals. The twelve top-ranked features, those most strongly correlated with the sleep stages, were selected from the extracted feature set. The selected features were fed into an AdaBoost with Random Forest (ADB + RF) classifier to validate the chosen segments and classify the sleep stages. The experiments were evaluated under two testing schemes: epoch-wise testing and subject-wise testing. The study was conducted using publicly available datasets: ISRUC-Sleep subgroup 1 (ISRUC-SG1), Sleep-EDF (S-EDF), the PhysioBank CAP sleep database (PB-CAPSDB), and S-EDF-78. This work demonstrates that the proposed fusion strategy outperforms the common practice of using individual PSG signals.
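The selection-and-classification stage described above can be illustrated with a brief sketch. The code below is not the authors' implementation: it assumes a precomputed 63-column feature matrix X (one row per scored epoch) with stage labels y, uses the skrebate package's ReliefF and scikit-learn's AdaBoostClassifier with a RandomForestClassifier base estimator, and all parameter values (neighbor count, tree counts, CV folds) are illustrative assumptions rather than the paper's settings.

```python
# Hedged sketch: ReliefF (ReF) feature ranking followed by AdaBoost with
# Random Forest base learners (ADB + RF). Feature extraction from the
# EEG/EOG/EMG epochs is assumed to have already produced X (n_epochs x 63).
import numpy as np
from skrebate import ReliefF                      # ReliefF feature ranking
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

def build_adb_rf():
    """AdaBoost ensemble whose weak learners are Random Forests (assumed sizes)."""
    return AdaBoostClassifier(
        estimator=RandomForestClassifier(n_estimators=50),  # 'base_estimator' on older sklearn
        n_estimators=10,
    )

def select_and_classify(X, y, n_selected=12):
    """Keep the 12 top-ranked features, then evaluate the classifier epoch-wise."""
    ref = ReliefF(n_features_to_select=n_selected, n_neighbors=100)
    ref.fit(X, y)
    top_idx = np.argsort(ref.feature_importances_)[::-1][:n_selected]
    X_sel = X[:, top_idx]

    clf = build_adb_rf()
    # Epoch-wise scheme: plain cross-validation over individual epochs.
    scores = cross_val_score(clf, X_sel, y, cv=5)
    return top_idx, scores
```

For the subject-wise scheme, the cross-validation split would instead keep all epochs of a recording in the same fold, for example by passing a per-subject group vector to scikit-learn's GroupKFold.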

Keywords: AASM rules; Epoch-wise analysis; Machine learning; Multi-modal analysis; Polysomnography signals; Random forest; Sleep staging.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adult
  • Electroencephalography*
  • Electromyography*
  • Electrooculography*
  • Female
  • Humans
  • Machine Learning*
  • Male
  • Polysomnography*
  • Signal Processing, Computer-Assisted
  • Sleep Stages* / physiology