An integrated deep learning model for motor intention recognition of multi-class EEG signals in upper limb amputees

Comput Methods Programs Biomed. 2021 Jul;206:106121. doi: 10.1016/j.cmpb.2021.106121. Epub 2021 Apr 21.

Abstract

Background and objective: Recognition of motor intention from electroencephalogram (EEG) signals has attracted considerable research interest in the field of pattern recognition, owing to its notable application in non-muscular communication and control for people with severe motor disabilities. In EEG analysis, higher classification performance depends on an appropriate representation of EEG features, which is mostly characterized by a single frequency band before a learning model is applied. Neglecting the other frequency bands can degrade recognition performance, because each band carries its own distinct information. Motivated by this observation, we propose an integrated deep learning model that obtains distinguishable features from different frequency bands to accurately classify multiple classes of upper limb movement intentions.

Methods: The proposed model combines long short-term memory (LSTM) networks with a stacked autoencoder (SAE). To validate the method, four high-level amputees were recruited to perform five motor intention tasks. The acquired EEG signals were first preprocessed; the effect of input representation on LSTM-SAE performance was then examined by feeding four task-related frequency bands into the model. The learning model was further improved with t-distributed stochastic neighbor embedding (t-SNE) to eliminate feature redundancy and enhance motor intention recognition.
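The band decomposition step above can be sketched in plain NumPy. The abstract does not name the four bands or the sampling rate, so the band edges (delta/theta/alpha/beta) and the 250 Hz rate below are illustrative assumptions, and FFT masking stands in for whatever filtering the authors actually used; each band-limited trace would then serve as one input representation for the LSTM-SAE.

```python
import numpy as np

# Assumed sampling rate in Hz (not stated in the abstract).
FS = 250

# Hypothetical band edges in Hz; the paper only states that four
# task-related frequency bands were fed into the model.
BANDS = {
    "delta": (0.5, 4.0),
    "theta": (4.0, 8.0),
    "alpha": (8.0, 13.0),
    "beta": (13.0, 30.0),
}

def band_decompose(signal, fs=FS):
    """Split a 1-D EEG trace into band-limited components via FFT masking."""
    n = signal.size
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectrum = np.fft.rfft(signal)
    components = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)  # keep only this band's bins
        components[name] = np.fft.irfft(spectrum * mask, n=n)
    return components

# 2 s of synthetic "EEG" as a stand-in for a preprocessed trial.
rng = np.random.default_rng(0)
eeg = rng.standard_normal(FS * 2)
bands = band_decompose(eeg)
```

Because the masks are disjoint, the four components sum to the 0.5-30 Hz content of the original trace; in practice a proper bandpass filter (e.g. a zero-phase Butterworth) would be preferred over hard spectral masking to avoid ringing.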

Results: The proposed model achieved an average classification performance of 99.01% accuracy, 99.10% precision, 99.09% recall, 99.09% F1-score, 99.77% specificity, and 99.0% Cohen's kappa across multi-subject, multi-class scenarios. Further evaluation with 2-dimensional t-SNE revealed that the signal decomposition yields distinct multi-class separability in the feature space.

Conclusion: This study demonstrated the superiority of the proposed model in accurately classifying upper limb movements from multiple classes of EEG signals, and its potential application in the development of more intuitive and naturalistic prosthetic control.

Keywords: Brain computer interface (BCI); Electroencephalography (EEG); Long short-term memory (LSTM); Motor intention (MI); Stacked Autoencoder (SAE); t-distributed stochastic neighbor embedding (t-SNE).

MeSH terms

  • Amputees*
  • Brain-Computer Interfaces*
  • Deep Learning*
  • Electroencephalography
  • Humans
  • Intention
  • Upper Extremity