Vision and the representation of the surroundings in spatial memory

Philos Trans R Soc Lond B Biol Sci. 2011 Feb 27;366(1564):596-610. doi: 10.1098/rstb.2010.0188.

Abstract

One of the paradoxes of vision is that the world as it appears to us and the image on the retina at any moment are not much like each other. The visual world seems to be extensive and continuous across time. However, the manner in which we sample the visual environment is neither extensive nor continuous. How does the brain reconcile these differences? Here, we consider existing evidence from both static and dynamic viewing paradigms together with the logical requirements of any representational scheme that would be able to support active behaviour. While static scene viewing paradigms favour extensive, but perhaps abstracted, memory representations, dynamic settings suggest sparser and task-selective representation. We suggest that in dynamic settings where movement within extended environments is required to complete a task, visual input, egocentric representations and allocentric representations work together to allow efficient behaviour. The egocentric model serves as a coding scheme in which actions can be planned, but also offers a potential means of providing the perceptual stability that we experience.

Publication types

  • Research Support, Non-U.S. Gov't
  • Review

MeSH terms

  • Humans
  • Memory / physiology*
  • Psychomotor Performance / physiology*
  • Space Perception / physiology*
  • Visual Perception / physiology*