Applying Common Spatial Pattern and Convolutional Neural Network to Classify Movements via EEG Signals

Clin EEG Neurosci. 2024 Mar 24:15500594241234836. doi: 10.1177/15500594241234836. Online ahead of print.

Abstract

Developing an electroencephalography (EEG)-based brain-computer interface (BCI) system is crucial to enhancing the control of external prostheses by accurately distinguishing various movements through brain signals. Such systems can improve quality of life for people with movement disabilities. This study combined two of the most successful methods used in BCI systems, one-versus-rest common spatial pattern (OVR-CSP) and a convolutional neural network (CNN), to automatically extract features and classify eight different movements of the shoulder, wrist, and elbow from EEG signals. Ten subjects participated in the experiment, and their EEG signals were recorded while they performed movements at fast and slow speeds. We applied preprocessing techniques before transforming the EEG signals into another space with OVR-CSP, then fed the transformed signals into a CNN architecture consisting of four convolutional layers. In addition, we extracted feature vectors after applying OVR-CSP and used them as inputs to KNN, SVM, and MLP classifiers, whose performance was compared with that of the CNN. The results demonstrated that classifying the eight movements with the proposed CNN architecture achieved an average accuracy of 97.65% for slow movements and 96.25% for fast movements in the subject-independent model. This method outperformed the other classifiers by a substantial margin, so it can be useful for improving BCI systems for better control of prostheses.
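To make the OVR-CSP stage concrete, the sketch below shows one binary CSP step (one movement class versus the rest), the building block that OVR-CSP repeats for each of the eight classes. This is a minimal illustration under standard CSP assumptions (trial-normalized covariances, a generalized eigendecomposition, log-variance features); the function names, array shapes, and number of filter pairs are illustrative, not the paper's exact implementation.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(class_trials, rest_trials, n_pairs=3):
    """Binary CSP: spatial filters separating one class from the rest.

    Both inputs have shape (n_trials, n_channels, n_samples).
    Returns a filter matrix of shape (2 * n_pairs, n_channels).
    """
    def avg_cov(trials):
        # Trace-normalized spatial covariance, averaged over trials.
        covs = [(x @ x.T) / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)

    C1, C2 = avg_cov(class_trials), avg_cov(rest_trials)
    # Generalized eigenproblem: C1 w = lambda (C1 + C2) w.
    eigvals, eigvecs = eigh(C1, C1 + C2)
    order = np.argsort(eigvals)[::-1]
    W = eigvecs[:, order]
    # Keep filters for the largest and smallest eigenvalues:
    # they maximize variance for one class and minimize it for the other.
    idx = list(range(n_pairs)) + list(range(-n_pairs, 0))
    return W[:, idx].T

def csp_features(W, trial):
    """Log of normalized variances of the spatially filtered trial."""
    z = W @ trial                      # (2 * n_pairs, n_samples)
    var = z.var(axis=1)
    return np.log(var / var.sum())
```

In an OVR scheme, `csp_filters` would be called once per movement class (that class's trials versus all others pooled), and the resulting feature vectors, or the filtered signals themselves, would feed the downstream CNN or the KNN/SVM/MLP classifiers.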

Keywords: brain–computer interface; convolutional neural network; electroencephalography; movements; one-versus-rest common spatial pattern.