Successive-signal biasing for a learned sound sequence

Xiaoming Zhou*, Étienne De Villers-Sidani, Rogerio Panizzutti, Michael M. Merzenich

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Adult rats were trained to detect the occurrence of a two-element sound sequence in a background of nine other nontarget sound pairs. Training resulted in a modest, enduring, static expansion of the cortical areas of representation of both target stimulus sounds. More importantly, once the initial stimulus A in the target A-B sequence was presented, the cortical "map" changed dynamically, specifically to exaggerate further the representation of the "anticipated" stimulus B. If B occurred, it was represented over a larger cortical area by more strongly excited, more coordinated, and more selectively responding neurons. This biasing peaked at the expected time of B onset with respect to A onset. No dynamic biasing of responses was recorded for any sound presented in a nontarget pair. Responses to nontarget frequencies flanking the representation of B were reduced in area and in response strength only after the presentation of A at the expected time of B onset. This study shows that cortical areas are not representationally static but, to the contrary, can be biased moment by moment in time as a function of behavioral context.

Original language: English
Pages (from-to): 14839-14844
Number of pages: 6
Journal: Proceedings of the National Academy of Sciences of the United States of America
Volume: 107
Issue number: 33
DOIs
State: Published - 17 Aug 2010

Keywords

  • Cortical representation
  • Perceptual training
  • Plasticity
  • Primary auditory cortex
