Multistability in auditory stream segregation: a predictive coding view

Philos Trans R Soc Lond B Biol Sci. 2012 Apr 5;367(1591):1001-12. doi: 10.1098/rstb.2011.0359.

Abstract

Auditory stream segregation involves linking temporally separate acoustic events into one or more coherent sequences. For any non-trivial sequence of sounds, many alternative descriptions can be formed, only one or very few of which emerge in awareness at any time. Evidence from studies showing bi-/multistability in auditory streaming suggests that some, perhaps many, of the alternative descriptions are represented in the brain in parallel and that they continuously vie for conscious perception. Here, based on a predictive coding view, we consider the nature of these sound representations and how they compete with each other. Predictive processing helps to maintain perceptual stability by signalling the continuation of previously established patterns as well as the emergence of new sound sources. It also provides a measure of how well each of the competing representations describes the current acoustic scene. This account of auditory stream segregation has been tested on perceptual data obtained in the auditory streaming paradigm.
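The competition sketched in the abstract can be illustrated schematically. The following is a hypothetical Python sketch, not the authors' actual model: two candidate descriptions of a tone sequence (an "integrated" single stream versus a "segregated" pair of streams) each accumulate prediction error, and the description that predicts the input best dominates perception. The description names and the per-event error values are invented for illustration.

```python
import math

# Assumed per-event prediction errors for two competing descriptions
# of an ABA_ tone sequence (values are made up for illustration):
errors = {
    "integrated (one stream)":  [0.2, 0.3, 0.2, 0.4, 0.3],
    "segregated (two streams)": [0.5, 0.1, 0.1, 0.1, 0.1],
}

def dominant_description(errors):
    # Turn each description's accumulated error into a normalized
    # "evidence" score: lower total error -> higher evidence.
    evidence = {name: math.exp(-sum(e)) for name, e in errors.items()}
    total = sum(evidence.values())
    posterior = {name: v / total for name, v in evidence.items()}
    # The description with the most evidence dominates awareness.
    return max(posterior, key=posterior.get), posterior

winner, posterior = dominant_description(errors)
print(winner)  # the better-predicting description wins the competition
```

In this toy run the segregated description accumulates less total error (0.9 versus 1.4), so it wins; in a multistable setting, noise or adaptation would let the losing description overtake it, producing perceptual switching.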

Publication types

  • Research Support, Non-U.S. Gov't
  • Review

MeSH terms

  • Acoustic Stimulation
  • Auditory Pathways / physiology
  • Auditory Perception / physiology*
  • Brain / physiology
  • Humans
  • Models, Neurological
  • Models, Psychological
  • Time Factors