Meaning maps and saliency models based on deep convolutional neural networks are insensitive to image meaning when predicting human fixations

Cognition. 2021 Jan;206:104465. doi: 10.1016/j.cognition.2020.104465. Epub 2020 Oct 20.

Abstract

Eye movements are vital for human vision, and it is therefore important to understand how observers decide where to look. Meaning maps (MMs), a technique to capture the distribution of semantic information across an image, have recently been proposed to support the hypothesis that meaning rather than image features guides human gaze. MMs have the potential to be an important tool far beyond eye-movement research. Here, we examine central assumptions underlying MMs. First, we compare the performance of MMs in predicting fixations with that of saliency models, showing that DeepGaze II - a deep neural network trained to predict fixations based on high-level features rather than meaning - outperforms MMs. Second, we show that whereas human observers respond to changes in meaning induced by manipulating object-context relationships, MMs and DeepGaze II do not. Together, these findings challenge central assumptions underlying the use of MMs to measure the distribution of meaning in images.
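
The comparison described in the abstract rests on scoring a predicted map (a meaning map or a saliency model's output) against human fixation locations. The sketch below illustrates one common way to do this, normalized scanpath saliency (NSS): the predicted map is z-scored and its values are averaged at fixated pixels. This is a minimal illustration under assumed conventions, not the paper's actual evaluation pipeline; the metric choice, the function name normalized_scanpath_saliency, the array shapes, and the random placeholder data are assumptions for demonstration only.

import numpy as np

def normalized_scanpath_saliency(pred_map: np.ndarray,
                                 fixations: np.ndarray) -> float:
    """NSS: mean of the z-scored predicted map sampled at fixated pixels.

    pred_map  : 2-D array (H, W) of predicted values (meaning or saliency map).
    fixations : (N, 2) integer array of (row, col) fixation coordinates.
    """
    # Z-score the map so values are comparable across models and images.
    z = (pred_map - pred_map.mean()) / (pred_map.std() + 1e-8)
    rows, cols = fixations[:, 0], fixations[:, 1]
    # Higher NSS means the map assigns relatively more mass to fixated locations.
    return float(z[rows, cols].mean())

if __name__ == "__main__":
    # Placeholder data only: random maps and fixations standing in for
    # a meaning map, a DeepGaze II prediction, and observed fixations.
    rng = np.random.default_rng(0)
    meaning_map = rng.random((600, 800))
    deepgaze_map = rng.random((600, 800))
    fix = rng.integers(0, [600, 800], size=(50, 2))
    print("MM NSS:      ", normalized_scanpath_saliency(meaning_map, fix))
    print("DeepGaze NSS:", normalized_scanpath_saliency(deepgaze_map, fix))

With real data, the same fixations would be scored against each candidate map, so differences in NSS (or related metrics such as AUC) reflect how well each map predicts where observers actually looked.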

Keywords: Deep neural networks; Eye movements; Meaning maps; Natural scenes; Saliency.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Eye Movements*
  • Humans
  • Neural Networks, Computer*
  • Semantics