Semantic-hierarchical model improves classification of spoken-word evoked electrocorticography

J Neurosci Methods. 2019 Jan 1;311:253-258. doi: 10.1016/j.jneumeth.2018.10.034. Epub 2018 Oct 30.

Abstract

Classification of spoken word-evoked potentials is useful for both neuroscientific and clinical applications, including brain-computer interfaces (BCIs). By evaluating whether adopting a biology-based structure improves a classifier's accuracy, we can investigate the importance of such structure in human brain circuitry and advance BCI performance. In this study, we propose a semantic-hierarchical structure for classifying spoken word-evoked cortical responses. The proposed structure first decodes the semantic grouping of the words (e.g., a body part vs. a number) and then decodes which exact word was heard. The proposed classifier structure exhibited a consistent ∼10% improvement in classification accuracy compared with a non-hierarchical structure. Our results provide a tool for investigating the neural representation of semantic hierarchy and the acoustic properties of spoken words in the human brain, and they suggest an improved algorithm for BCIs operated by decoding heard, and possibly imagined, words.
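The two-stage scheme described above (first decode the semantic category, then decode the specific word within that category) can be illustrated with a minimal sketch. The feature matrix, the example categories ("body" vs. "number"), and the use of logistic regression below are assumptions for illustration only, not the authors' implementation or feature pipeline.

```python
# Minimal sketch of a two-stage semantic-hierarchical classifier.
# Assumptions (not from the paper): each trial is a fixed-length ECoG
# feature vector, words fall into two illustrative semantic categories,
# and logistic regression stands in for the actual classifiers.
import numpy as np
from sklearn.linear_model import LogisticRegression


def fit_hierarchical(X, word_labels, word_to_category):
    """Fit one category-level classifier plus one word-level classifier per category."""
    cat_labels = np.array([word_to_category[w] for w in word_labels])
    top = LogisticRegression(max_iter=1000).fit(X, cat_labels)
    per_cat = {}
    for cat in np.unique(cat_labels):
        mask = cat_labels == cat
        # Word-level classifier trained only on trials from this category,
        # so it discriminates within a restricted label space.
        per_cat[cat] = LogisticRegression(max_iter=1000).fit(X[mask], word_labels[mask])
    return top, per_cat


def predict_hierarchical(X, top, per_cat):
    """Stage 1: predict the semantic category; stage 2: predict the word within it."""
    cats = top.predict(X)
    words = np.empty(len(X), dtype=object)
    for cat in np.unique(cats):
        mask = cats == cat
        words[mask] = per_cat[cat].predict(X[mask])
    return words


# Toy usage with random features (purely illustrative, no real ECoG data).
rng = np.random.default_rng(0)
words = np.array(["hand", "foot", "two", "seven"] * 25)
word_to_category = {"hand": "body", "foot": "body", "two": "number", "seven": "number"}
X = rng.normal(size=(len(words), 16))
top, per_cat = fit_hierarchical(X, words, word_to_category)
print(predict_hierarchical(X[:5], top, per_cat))
```

Training each word-level classifier only on trials from its own semantic category mirrors the idea in the abstract that imposing the biology-based grouping narrows the word-level decision, which is where the reported ∼10% accuracy gain over a flat, non-hierarchical classifier would arise.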

Keywords: Brain computer interface; Decoding words; Electrocorticography; Semantic hierarchical structure.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adult
  • Algorithms
  • Brain / physiology*
  • Electrocorticography
  • Evoked Potentials
  • Humans
  • Male
  • Models, Neurological*
  • Pattern Recognition, Automated / methods*
  • Semantics*
  • Signal Processing, Computer-Assisted*
  • Speech
  • Speech Perception / physiology*
  • Young Adult