Predicting Visual Fixations

Annu Rev Vis Sci. 2023 Sep 15:9:269-291. doi: 10.1146/annurev-vision-120822-072528. Epub 2023 Jul 7.

Abstract

As we navigate and behave in the world, we constantly decide, a few times per second, where to look next. The outcomes of these decisions in response to visual input are comparatively easy to measure as trajectories of eye movements, offering insight into many unconscious and conscious visual and cognitive processes. In this article, we review recent advances in predicting where we look. We focus on evaluating and comparing models: How can we consistently measure how well models predict eye movements, and how can we judge the contribution of different mechanisms? Probabilistic models facilitate a unified approach to fixation prediction that allows us to use the fraction of explainable information that a model explains to compare different models across different settings, such as static and video saliency, as well as scanpath prediction. We review how the large variety of saliency maps and scanpath models can be translated into this unifying framework, how much different factors contribute, and how we can select the most informative examples for model comparison. We conclude that the universal scale of information gain offers a powerful tool for the inspection of candidate mechanisms and experimental design that helps us understand the continual decision-making process determining where we look.
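The universal scale mentioned above is the information gain of a probabilistic fixation model over a baseline, measured in bits per fixation. As a rough illustration (not the authors' implementation), the following sketch evaluates two normalized fixation densities at observed fixation locations and reports the model's average log-likelihood advantage over a uniform baseline; all array shapes and the toy fixation data are assumptions for the example.

```python
import numpy as np

def information_gain(model_density, baseline_density, fixations):
    """Average log-likelihood advantage (bits per fixation) of a
    probabilistic fixation model over a baseline density.

    model_density, baseline_density: 2D arrays, each summing to 1,
        giving the predicted probability of a fixation per pixel.
    fixations: iterable of (row, col) fixation coordinates.
    """
    rows, cols = zip(*fixations)
    rows, cols = np.asarray(rows), np.asarray(cols)
    log_p_model = np.log2(model_density[rows, cols])
    log_p_base = np.log2(baseline_density[rows, cols])
    return float(np.mean(log_p_model - log_p_base))

# Toy example: a model that concentrates probability mass near the
# fixated region earns positive information gain over a uniform baseline.
h, w = 10, 10
uniform = np.full((h, w), 1.0 / (h * w))
model = np.full((h, w), 0.5 / (h * w))  # half the mass spread uniformly
model[4:6, 4:6] += 0.5 / 4              # other half on a 2x2 hotspot
fixs = [(4, 4), (5, 5), (4, 5)]
ig = information_gain(model, uniform, fixs)  # positive, in bits/fixation
```

On this universal scale, any saliency map can be compared once it is converted into a properly normalized probability density, which is what makes the framework unifying across static, video, and scanpath settings.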

Keywords: benchmarking; eye movements; fixations; information theory; model comparison; saliency; taxonomy; transfer learning; unifying framework.

Publication types

  • Review
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Eye Movements*
  • Fixation, Ocular*