Computational modeling of human multisensory spatial representation by a neural architecture

PLoS One. 2023 Mar 8;18(3):e0280987. doi: 10.1371/journal.pone.0280987. eCollection 2023.

Abstract

Our brain constantly combines sensory information into unitary percepts to build coherent representations of the environment. Although this process appears seamless, integrating inputs from different sensory modalities must overcome several computational issues, such as recoding and statistical inference problems. Following these assumptions, we developed a neural architecture replicating humans' ability to use audiovisual spatial representations. We considered the well-known ventriloquist illusion as a benchmark to evaluate its phenomenological plausibility. Our model closely replicated human perceptual behavior, proving to be a faithful approximation of the brain's ability to develop audiovisual spatial representations. Given its ability to model audiovisual performance in a spatial localization task, we release our model together with the dataset we recorded for its validation. We believe it will be a powerful tool for modeling and better understanding multisensory integration processes in experimental and rehabilitation settings.
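
To make the statistical-inference account of the ventriloquist effect concrete, the sketch below shows a standard reliability-weighted (maximum-likelihood) cue-combination rule, in which the fused spatial estimate is pulled toward the more reliable visual cue. This is an illustrative textbook formulation, not the neural architecture described in the paper, and the noise parameters (sigma_aud, sigma_vis) are assumed values chosen for the example.

    import numpy as np

    # Minimal sketch of reliability-weighted (maximum-likelihood) audiovisual
    # integration, a common account of the ventriloquist shift.  All parameter
    # values are illustrative assumptions, not taken from the paper or dataset.

    def integrate_av(loc_aud, loc_vis, sigma_aud=8.0, sigma_vis=2.0):
        """Return the fused spatial estimate (degrees) for one audiovisual trial.

        Each cue is weighted by its reliability (inverse variance), so the noisier
        auditory estimate is drawn toward the more precise visual one.
        """
        w_aud = 1.0 / sigma_aud ** 2
        w_vis = 1.0 / sigma_vis ** 2
        return (w_aud * loc_aud + w_vis * loc_vis) / (w_aud + w_vis)

    # Example: a sound at +10 deg paired with a flash at 0 deg is localized
    # near the visual source -- the classic ventriloquist shift.
    print(integrate_av(loc_aud=10.0, loc_vis=0.0))  # ~0.59 deg

Under these assumed noise levels, the visual cue dominates; increasing sigma_vis (e.g., with a blurred flash) would weaken the shift, which is the qualitative pattern such localization tasks are used to probe.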

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Acoustic Stimulation
  • Auditory Perception
  • Brain
  • Computer Simulation
  • Humans
  • Illusions*
  • Photic Stimulation
  • Visual Perception*

Grants and funding

The research was partially supported by the MYSpace project (principal investigator MG), which has received funding from the European Research Council under the European Union’s Horizon 2020 research and innovation program (Grant 948349). No additional external funding was received for this study.