Rapid computations of spectrotemporal prediction error support perception of degraded speech

eLife. 2020 Nov 4;9:e58077. doi: 10.7554/eLife.58077.

Abstract

Human speech perception can be described as Bayesian perceptual inference, but how are these Bayesian computations instantiated neurally? We used magnetoencephalographic recordings of brain responses to degraded spoken words and experimentally manipulated signal quality and prior knowledge. We first demonstrate that spectrotemporal modulations in speech are more strongly represented in neural responses than alternative speech representations (e.g. spectrogram or articulatory features). Critically, signal quality and expectations from prior written text interacted in determining the quality of neural representations: increased signal quality enhanced neural representations of speech that mismatched prior expectations, but led to greater suppression of speech that matched prior expectations. This interaction is a unique neural signature of prediction error computations and is apparent in neural responses within 100 ms of speech input. Our findings contribute to the detailed specification of a computational model of speech perception based on predictive coding frameworks.
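Why prediction error computations yield this interaction can be seen in a minimal numerical sketch. The model below is illustrative only, not the authors' analysis: it assumes the heard word and the prior expectation are feature vectors, that signal quality simply scales the sensory input, and that the neural response is the prediction error (input minus prediction). The function and variable names are hypothetical.

```python
import numpy as np

def pe_representation(clarity, match):
    """Toy model: strength with which a prediction-error response
    encodes the heard word, for matching vs mismatching priors."""
    word = np.zeros(8)
    word[0] = 1.0                            # feature vector of the heard word
    prior = word if match else np.eye(8)[1]  # matching vs mismatching written text
    sensory = clarity * word                 # signal quality scales the input
    pe = sensory - prior                     # prediction error = input - prediction
    return abs(pe @ word)                    # how strongly PE encodes the heard word

for clarity in (0.25, 0.75):
    print(f"clarity={clarity}: "
          f"match={pe_representation(clarity, True):.2f}, "
          f"mismatch={pe_representation(clarity, False):.2f}")
```

Under these assumptions, raising clarity shrinks the error for matching speech (the prediction cancels the input) while growing the word-related error for mismatching speech, reproducing the crossover interaction that distinguishes prediction-error coding from models in which clarity uniformly improves representations.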

Keywords: MEG; human; neuroscience; predictive coding; spectrotemporal modulations; speech perception.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adolescent
  • Adult
  • Bayes Theorem
  • Brain / physiology*
  • Brain / physiopathology*
  • Computer Simulation
  • Female
  • Humans
  • Linear Models
  • Magnetoencephalography*
  • Male
  • Neurons / physiology
  • Regression Analysis
  • Speech
  • Speech Disorders / physiopathology*
  • Speech Perception*
  • Young Adult