Decoding covert speech for intuitive control of brain-computer interfaces based on single-trial EEG: a feasibility study

IEEE Int Conf Rehabil Robot. 2019 Jun;2019:689-693. doi: 10.1109/ICORR.2019.8779499.

Abstract

For individuals with severe motor impairments, controlling external devices such as robotic arms or wheelchairs can be challenging, since many devices require some degree of motor control to operate, e.g. via a joystick. A brain-computer interface (BCI) relies only on signals from the brain and can therefore serve as a controller in place of the muscles. Motor imagery (MI) has been used as a control signal for BCIs in many studies. However, MI may not be suitable for all control purposes, and some users cannot obtain BCI control with MI at all. The aim of this study was to investigate the feasibility of decoding covert speech from single-trial EEG and to compare and combine it with MI. In seven healthy subjects, EEG was recorded from twenty-five channels during six different actions: speaking three words (both covert and overt speech), performing two arm movements (both motor imagery and execution), and one idle class. Temporal and spectral features were derived from the epochs and classified with a random forest classifier. The average classification accuracy was 67 ± 9 % for covert speech and 75 ± 7 % for overt speech; this was 5-10 % lower than the corresponding movement classification. The combined movement-speech decoder reached 61 ± 9 % (covert) and 67 ± 7 % (overt), but it makes more classes available for control. These results outline the possibility of using covert speech to control a BCI and are a step towards a multimodal BCI system with improved usability.
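The paper does not include code, but the pipeline summarized in the abstract (temporal and spectral features per EEG epoch, random forest classification) can be sketched as follows. This is an illustrative Python sketch only, not the authors' implementation: the sampling rate, epoch length, frequency bands, feature statistics, tree count, and the synthetic data are all assumptions; only the 25-channel/six-class setup, the temporal-plus-spectral feature families, and the random forest classifier come from the abstract.

```python
# Illustrative sketch of the described pipeline (NOT the authors' code).
# Assumed parameters: sampling rate, bands, feature statistics, tree count.
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

FS = 250  # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # assumed bands

def epoch_features(epoch):
    """epoch: (n_channels, n_samples) array -> 1-D feature vector."""
    feats = []
    for ch in epoch:
        # Temporal features: simple per-channel amplitude statistics.
        feats += [ch.mean(), ch.std(), np.abs(np.diff(ch)).mean()]
        # Spectral features: mean band power from Welch's PSD estimate.
        f, psd = welch(ch, fs=FS, nperseg=FS)
        for lo, hi in BANDS.values():
            feats.append(psd[(f >= lo) & (f < hi)].mean())
    return np.asarray(feats)

# Synthetic stand-in data mirroring the study's shapes: 25 channels,
# 2-second epochs, 6 classes (3 words, 2 movements, 1 idle).
rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 300, 25, 2 * FS
X_raw = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 6, size=n_trials)

X = np.stack([epoch_features(ep) for ep in X_raw])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")  # ~chance (1/6) on noise
```

On random noise this reaches only chance level (about 17 % for six classes); the reported 61-75 % accuracies reflect structure in real single-trial EEG that the synthetic data cannot reproduce.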

MeSH terms

  • Brain-Computer Interfaces*
  • Electroencephalography*
  • Feasibility Studies
  • Female
  • Humans
  • Male
  • Motor Activity / physiology
  • Movement
  • Speech / physiology*
  • Young Adult