Sensory processing during viewing of cinematographic material: computational modeling and functional neuroimaging

Neuroimage. 2013 Feb 15;67:213-26. doi: 10.1016/j.neuroimage.2012.11.031. Epub 2012 Nov 29.

Abstract

The investigation of brain activity using naturalistic, ecologically valid stimuli is becoming an important challenge for neuroscience research. Several approaches have been proposed, primarily relying on data-driven methods (e.g., independent component analysis, ICA). However, data-driven methods often require post-hoc interpretation of the imaging results to draw inferences about the underlying sensory, motor, or cognitive functions. Here, we propose using a biologically plausible computational model to extract (multi-)sensory stimulus statistics that can be used for standard hypothesis-driven analyses (general linear model, GLM). We ran two separate fMRI experiments, both of which involved subjects watching an episode of a TV series. In Exp 1, we manipulated the presentation by switching color, motion, and/or sound on and off at variable intervals, whereas in Exp 2 the video was played in its original version, with all of its continuous changes of the different sensory features intact. For both vision and audition, we extracted stimulus statistics corresponding to spatial and temporal discontinuities of low-level features, as well as a combined measure related to overall stimulus saliency. Results showed that activity in the occipital visual cortex and the superior temporal auditory cortex co-varied with changes of the low-level features. Visual saliency further boosted activity in the extrastriate visual cortex and posterior parietal cortex, while auditory saliency enhanced activity in the superior temporal cortex. Data-driven ICA analyses of the same datasets also identified "sensory" networks comprising visual and auditory areas, but without providing specific information about the possible underlying processes (e.g., whether these relate to modality, stimulus features, and/or saliency). We conclude that combining computational modeling with the GLM enables tracking the impact of bottom-up signals on brain activity during viewing of complex, dynamic multisensory stimuli, beyond the capability of purely data-driven approaches.
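To make the pipeline concrete, the sketch below illustrates the general idea described in the abstract: compute frame-wise low-level stimulus statistics from a video, convolve them with a canonical haemodynamic response function (HRF), and assemble them as regressors for a GLM. This is a minimal illustration, not the authors' model (which is biologically far more detailed and covers audition and saliency as well); the feature measures, function names, and parameters here are illustrative assumptions.

```python
# Minimal sketch: frame-wise stimulus statistics -> HRF-convolved GLM regressors.
# All names and parameter choices are illustrative, not the published model.
import numpy as np
from scipy.stats import gamma

def temporal_discontinuity(frames):
    """Mean absolute luminance change between consecutive frames.

    frames : ndarray of shape (T, H, W), grayscale intensities.
    Returns a length-T vector (first entry set to 0 by convention).
    """
    diffs = np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2))
    return np.concatenate([[0.0], diffs])

def spatial_discontinuity(frames):
    """Mean gradient magnitude within each frame (an edge-density proxy)."""
    gy, gx = np.gradient(frames.astype(float), axis=(1, 2))
    return np.hypot(gx, gy).mean(axis=(1, 2))

def canonical_hrf(dt, duration=32.0):
    """Double-gamma HRF sampled every `dt` seconds (SPM-like shape)."""
    t = np.arange(0.0, duration, dt)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0  # peak ~6 s, late undershoot
    return h / h.sum()

def make_regressor(stat, frame_rate, tr, n_scans):
    """Convolve a frame-wise statistic with the HRF and resample at scan times."""
    conv = np.convolve(stat, canonical_hrf(1.0 / frame_rate), mode='full')[:len(stat)]
    scan_idx = np.clip((np.arange(n_scans) * tr * frame_rate).astype(int),
                       0, len(conv) - 1)
    r = conv[scan_idx]
    return (r - r.mean()) / (r.std() + 1e-12)  # z-score for the design matrix

# Toy demonstration with random "frames"; in practice these would be decoded
# video frames (e.g., loaded via imageio or OpenCV).
rng = np.random.default_rng(0)
frames = rng.random((250, 64, 80))  # 10 s of "video" at 25 fps
X = np.column_stack([
    make_regressor(temporal_discontinuity(frames), frame_rate=25.0, tr=2.0, n_scans=5),
    make_regressor(spatial_discontinuity(frames), frame_rate=25.0, tr=2.0, n_scans=5),
])
print(X.shape)  # (5, 2): one column per stimulus statistic
```

In a full analysis, columns like these (plus nuisance regressors) would form the GLM design matrix fitted voxel-wise to the fMRI time series, so that brain activity co-varying with each stimulus statistic can be mapped directly, as the abstract describes.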

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Auditory Cortex / physiology*
  • Auditory Perception / physiology*
  • Brain Mapping / methods*
  • Computer Simulation
  • Female
  • Humans
  • Image Interpretation, Computer-Assisted / methods*
  • Magnetic Resonance Imaging / methods
  • Male
  • Models, Neurological*
  • Reproducibility of Results
  • Sensitivity and Specificity
  • Television
  • Visual Cortex / physiology*
  • Visual Perception / physiology*
  • Young Adult