Images with harder-to-reconstruct visual representations leave stronger memory traces

Nat Hum Behav. 2024 May 13. doi: 10.1038/s41562-024-01870-3. Online ahead of print.

Abstract

Much of what we remember is retained not through intentional selection, but simply as a by-product of perceiving. This raises a foundational question about the architecture of the mind: how does perception interface with and influence memory? Here, inspired by a classic proposal relating perceptual processing to memory durability, the level-of-processing theory, we present a sparse coding model for compressing feature embeddings of images, and show that the reconstruction residuals from this model predict how well images are encoded into memory. In an open memorability dataset of scene images, we show that reconstruction error explains not only memory accuracy but also response latencies during retrieval, in the latter case subsuming all of the variance explained by powerful vision-only models. We also confirm a prediction of this account with 'model-driven psychophysics'. This work establishes reconstruction error as an important signal interfacing perception and memory, possibly through adaptive modulation of perceptual processing.
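
As a rough illustration of the kind of pipeline the abstract describes, the sketch below computes per-image sparse-coding reconstruction residuals from feature embeddings. It is a minimal sketch, not the authors' implementation: the random stand-in embeddings, the dictionary size, the sparsity penalty, and the use of scikit-learn's DictionaryLearning are all assumptions made for illustration.

```python
# Hypothetical sketch: sparse-coding reconstruction error for image embeddings.
# Assumptions (not from the paper): random "embeddings" stand in for real image
# features; dictionary size and sparsity penalty are arbitrary choices.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
embeddings = rng.standard_normal((500, 128))  # 500 images x 128-dim features

# Learn a sparse dictionary over the embeddings.
dl = DictionaryLearning(
    n_components=64,                     # number of dictionary atoms (assumed)
    alpha=1.0,                           # L1 sparsity penalty (assumed)
    transform_algorithm="lasso_lars",
    random_state=0,
)
codes = dl.fit_transform(embeddings)     # sparse codes, shape (500, 64)
reconstructions = codes @ dl.components_ # reconstruct each embedding

# Per-image reconstruction residual: the proposed memorability signal.
residuals = np.linalg.norm(embeddings - reconstructions, axis=1)
print(residuals[:5])
```

Under the abstract's account, these residuals would then be correlated with behavioural memory measures such as recognition accuracy and retrieval latency, with harder-to-reconstruct images (larger residuals) predicted to leave stronger memory traces.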