Individual differences in internal models explain idiosyncrasies in scene perception

Cognition. 2024 Apr;245:105723. doi: 10.1016/j.cognition.2024.105723. Epub 2024 Jan 22.

Abstract

According to predictive processing theories, vision is facilitated by predictions derived from our internal models of what the world should look like. However, the contents of these models, and how they vary across people, remain unclear. Here, we use drawing as a behavioral readout of the contents of internal models in individual participants. Participants were first asked to draw typical versions of scene categories, as descriptors of their internal models. These drawings were converted into standardized 3D renders, which we used as stimuli in subsequent scene categorization experiments. Across two experiments, participants' scene categorization was more accurate for renders tailored to their own drawings than for renders based on others' drawings or on copies of scene photographs, suggesting that scene perception is determined by a match with idiosyncratic internal models. Using a deep neural network to computationally evaluate similarities between scene renders, we further demonstrate that graded similarity to the render based on a participant's own typical drawing (and thus to their internal model) predicts categorization performance across a range of candidate scenes. Together, our results showcase the potential of a new method for understanding individual differences: one that starts from participants' personal expectations about the structure of real-world scenes.
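The DNN-based similarity analysis described in the abstract can be illustrated with a minimal sketch. The abstract does not specify which network or layer the authors used, so the choices below (a pretrained ResNet-50 from torchvision, penultimate-layer features, cosine similarity, and the file names) are all illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of a DNN-based scene-similarity measure (assumed setup,
# not the authors' exact pipeline): extract penultimate-layer features
# from a pretrained ResNet-50 and compare two scene renders by cosine
# similarity. Network, layer, and file names are illustrative.
import torch
import torchvision.models as models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

# Drop the final classification layer to obtain a 2048-d feature vector.
feature_extractor = torch.nn.Sequential(*list(model.children())[:-1])
preprocess = weights.transforms()  # standard ImageNet preprocessing

def features(path: str) -> torch.Tensor:
    """Return a flattened feature vector for one scene render."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return feature_extractor(img).flatten(1)

# Hypothetical inputs: a render of a participant's own "typical" drawing
# vs. a candidate scene render shown at test.
own = features("own_typical_render.png")
candidate = features("candidate_render.png")

similarity = torch.nn.functional.cosine_similarity(own, candidate).item()
print(f"cosine similarity: {similarity:.3f}")
```

In the study, graded similarity scores of this kind were related to per-render categorization performance; any specific regression linking the two would be a further assumption beyond what the abstract reports.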

Keywords: Drawing; Individual differences; Internal models; Predictive processing; Scene perception.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Humans
  • Individuality*
  • Neural Networks, Computer
  • Pattern Recognition, Visual*
  • Photic Stimulation / methods
  • Visual Perception