Silent EEG-Speech Recognition Using Convolutional and Recurrent Neural Network with 85% Accuracy of 9 Words Classification

Sensors (Basel). 2021 Oct 11;21(20):6744. doi: 10.3390/s21206744.

Abstract

In this work, we focus on silent speech recognition in electroencephalography (EEG) data of healthy individuals to advance brain-computer interface (BCI) development toward including people with neurodegeneration and with movement and communication difficulties. Our dataset was recorded from 270 healthy subjects during silent speech of eight different Russian words (commands): 'forward', 'backward', 'up', 'down', 'help', 'take', 'stop', and 'release', and one pseudoword. We began by demonstrating that the distributions of silently spoken words can be statistically very close and that words describing directed movements share similar patterns of brain activity. However, after training on data from a single individual, we achieved 85% accuracy on nine-word classification (including the pseudoword) and 88% average accuracy on binary classification. We show that a smaller dataset collected from one participant allows building a more accurate classifier for that subject than a larger dataset collected from a group of people. At the same time, we show that what is learned from a limited sample of EEG data is transferable to the general population. Thus, we demonstrate the possibility of using selected command words to create an EEG-based input device for people on whom the neural network classifier has not been trained, which is particularly important for people with disabilities.
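
The title names a convolutional-plus-recurrent classifier for the nine-class task. As a rough illustration of that type of model, the sketch below builds a compact CNN + GRU network over windowed multichannel EEG epochs; the channel count, window length, layer sizes, and all other hyperparameters are assumptions for the example and are not the authors' reported architecture or settings.

```python
# Illustrative sketch only: a CNN + GRU classifier for windowed EEG epochs,
# in the spirit of the convolutional-recurrent model named in the title.
# All hyperparameters (64 channels, 256-sample windows, hidden size, etc.)
# are assumptions, not values taken from the paper.
import torch
import torch.nn as nn

class EEGConvGRUClassifier(nn.Module):
    def __init__(self, n_channels=64, n_classes=9, hidden=128):
        super().__init__()
        # Temporal convolutions over each EEG window (batch, channels, time)
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.BatchNorm1d(128),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Recurrent layer aggregates the convolutional feature sequence
        self.gru = nn.GRU(input_size=128, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, n_channels, time)
        f = self.conv(x)             # (batch, 128, time / 4)
        f = f.transpose(1, 2)        # (batch, time / 4, 128) for the GRU
        _, h = self.gru(f)           # h: (1, batch, hidden)
        return self.head(h[-1])      # (batch, n_classes) logits

if __name__ == "__main__":
    model = EEGConvGRUClassifier()
    dummy = torch.randn(8, 64, 256)  # 8 windows, 64 channels, 256 samples each
    print(model(dummy).shape)        # torch.Size([8, 9]): one logit per word class
```

The nine output logits correspond to the eight command words plus the pseudoword; a binary variant of the same model would simply use two output classes.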

Keywords: EEG; EEG-BCI; brain–computer interface; deep learning; eSports; imagined speech; neurodegeneration; neurodegeneration treatment; neurorehabilitation; senescence; silent speech; speech recognition.

MeSH terms

  • Brain-Computer Interfaces*
  • Electroencephalography
  • Humans
  • Neural Networks, Computer
  • Speech
  • Speech Perception*