An Interpretable Deep Learning Model for Speech Activity Detection Using Electrocorticographic Signals

IEEE Trans Neural Syst Rehabil Eng. 2022;30:2783-2792. doi: 10.1109/TNSRE.2022.3207624. Epub 2022 Oct 10.

Abstract

Numerous state-of-the-art solutions for neural speech decoding and synthesis incorporate deep learning into the processing pipeline. These models are typically opaque and can require significant computational resources to train and run. A deep learning architecture is presented that learns input bandpass filters, capturing task-relevant spectral features directly from the data. Incorporating such explainable feature extraction into the model furthers the goal of creating end-to-end architectures that enable automated subject-specific parameter tuning while yielding an interpretable result. The model is demonstrated on intracranial brain data collected during a speech task. Operating on raw, unprocessed time samples, it detects the presence of speech at every time sample in a causal manner, suitable for online application. Model performance is comparable to or better than that of existing approaches requiring substantial signal preprocessing, and the learned frequency bands converge to ranges supported by previous studies.
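The learned bandpass front end described in the abstract can be illustrated with a windowed-sinc parameterization, in which each filter is defined by just two trainable cutoff frequencies (a SincNet-style approach). This is a minimal sketch, not the authors' implementation; the cutoffs below are hypothetical high-gamma values chosen for illustration, not the paper's learned bands.

```python
import numpy as np

def sinc_bandpass(f_low, f_high, kernel_size, fs):
    """Build a windowed-sinc bandpass FIR kernel as the difference of
    two low-pass sinc kernels, parameterized only by its cutoffs."""
    t = np.arange(kernel_size) - (kernel_size - 1) / 2

    def lowpass(fc):
        # ideal low-pass impulse response, truncated to kernel_size taps
        return 2 * fc / fs * np.sinc(2 * fc / fs * t)

    h = lowpass(f_high) - lowpass(f_low)
    h *= np.hamming(kernel_size)  # taper to reduce spectral leakage
    return h

fs = 1000.0  # hypothetical ECoG sampling rate (Hz)
# hypothetical band, e.g. high gamma, widely used in ECoG speech studies
h = sinc_bandpass(f_low=70.0, f_high=150.0, kernel_size=129, fs=fs)

# causal application: y[n] depends only on x[n], x[n-1], ... (past samples),
# matching the abstract's per-timesample online setting
x = np.random.randn(2000)
y = np.convolve(x, h, mode="full")[: len(x)]
```

In a full model, `f_low` and `f_high` would be network parameters updated by backpropagation, so the filter bank converges to task-relevant bands rather than being fixed by hand.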

Publication types

  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Brain
  • Brain-Computer Interfaces*
  • Deep Learning*
  • Electrocorticography
  • Humans
  • Speech