Cortical field maps across human sensory cortex

Front Comput Neurosci. 2023 Dec 15:17:1232005. doi: 10.3389/fncom.2023.1232005. eCollection 2023.

Abstract

Cortical processing pathways for sensory information in the mammalian brain tend to be organized into topographical representations that encode fundamental sensory dimensions. Numerous laboratories have now shown how these representations are organized into many distinct cortical field maps (CFMs) across visual and auditory cortex, with each CFM supporting a specialized computation or set of computations that underlie the associated perceptual behaviors. An individual CFM is defined by two orthogonal topographical gradients that reflect two essential aspects of feature space for that sense. Multiple adjacent CFMs are in turn organized across visual and auditory cortex into macrostructural patterns termed cloverleaf clusters. CFMs within cloverleaf clusters are thought to share properties such as receptive field distribution, cortical magnification, and processing specialization. Recent measurements point to the likely existence of CFMs in the other senses as well, with topographical representations of at least one sensory dimension demonstrated in somatosensory, gustatory, and possibly olfactory cortical pathways. Here we discuss the evidence for CFM and cloverleaf cluster organization across human sensory cortex, as well as the approaches used to identify such organizational patterns. Knowledge of how these topographical representations are organized across cortex provides insight into how our conscious perceptions are created from our basic sensory inputs. In addition, studying how these representations change during development, trauma, and disease serves as an important tool for improving clinical therapies and rehabilitation for sensory deficits.

Keywords: auditory field map; cloverleaf cluster; gustatory; periodotopy; retinotopy; somatotopy; tonotopy; visual field map.

Publication types

  • Review

Grants and funding

This work was supported in part by research grant #1329255 from the National Science Foundation Cognitive Sciences Program and by startup funds from the Department of Cognitive Sciences at the University of California, Irvine.