The time-course of cortical responses to speech revealed by fast optical imaging

Brain Lang. 2018 Sep;184:32-42. doi: 10.1016/j.bandl.2018.06.006. Epub 2018 Jun 27.

Abstract

Recent work has sought to describe the time-course of spoken word recognition, from initial acoustic cue encoding through lexical activation, and to identify the cortical areas involved in each stage of analysis. However, existing methods are limited in either temporal or spatial resolution, and as a result have provided only partial answers to the question of how listeners encode acoustic information in speech. We present data from an experiment using a novel neuroimaging method, fast optical imaging, to directly assess the time-course of speech perception, providing non-invasive measurement of speech sound representations localized to specific cortical areas. We find that listeners encode speech in terms of continuous acoustic cues at early stages of processing (ca. 96 ms post-stimulus onset), and begin activating phonological category representations rapidly (ca. 144 ms post-stimulus). Moreover, cue-based representations are widespread in the brain and overlap in time with graded category-based representations, suggesting that spoken word recognition involves simultaneous activation of both continuous acoustic cues and phonological categories.

Keywords: Event-related potentials; Optical imaging; Phonological categorization; Speech perception; Spoken language processing.

Publication types

  • Research Support, N.I.H., Extramural
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adult
  • Brain / diagnostic imaging*
  • Brain / physiology
  • Cues
  • Electroencephalography
  • Female
  • Humans
  • Male
  • Neuroimaging
  • Optical Imaging
  • Phonetics
  • Speech / physiology*
  • Speech Perception / physiology*
  • Young Adult