Successive-signal biasing for a learned sound sequence

Proc Natl Acad Sci U S A. 2010 Aug 17;107(33):14839-44. doi: 10.1073/pnas.1009433107. Epub 2010 Aug 2.

Abstract

Adult rats were trained to detect the occurrence of a two-element sound sequence presented in a background of nine other nontarget sound pairs. Training resulted in a modest, enduring, static expansion of the cortical areas of representation of both target stimulus sounds. More importantly, once the initial stimulus A in the target A-B sequence was presented, the cortical "map" changed dynamically, specifically to further exaggerate the representation of the "anticipated" stimulus B. If B occurred, it was represented over a larger cortical area by more strongly excited, more coordinated, and more selectively responding neurons. This biasing peaked at the expected time of B onset relative to A onset. No dynamic biasing of responses was recorded for any sound presented in a nontarget pair. Responses to nontarget frequencies flanking the representation of B were reduced in area and in response strength, but only after the presentation of A and at the expected time of B onset. This study shows that cortical representations are not static but, to the contrary, can be biased moment by moment in time as a function of behavioral context.

Publication types

  • Research Support, N.I.H., Extramural
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Acoustic Stimulation
  • Animals
  • Auditory Cortex / cytology
  • Auditory Cortex / physiology*
  • Behavior, Animal / physiology
  • Brain Mapping
  • Discrimination, Psychological / physiology
  • Female
  • Learning / physiology*
  • Models, Neurological
  • Neurons / cytology
  • Neurons / physiology*
  • Rats
  • Rats, Sprague-Dawley
  • Sound*