Eye and head movements while encoding and recognizing panoramic scenes in virtual reality

PLoS One. 2023 Feb 17;18(2):e0282030. doi: 10.1371/journal.pone.0282030. eCollection 2023.

Abstract

One approach to studying the recognition of scenes and objects relies on comparing eye movement patterns during encoding and recognition. Past studies typically analyzed the perception of flat stimuli of limited extent, presented on a computer monitor, that did not require head movements. In contrast, participants in the present study saw omnidirectional panoramic scenes through an immersive 3D virtual reality viewer, and they could move their heads freely to inspect different parts of the visual scenes. This allowed us to examine how unconstrained observers use their head and eyes to encode and recognize visual scenes. By studying head and eye movements within a fully immersive environment, and applying cross-recurrence analysis, we found that eye movements are strongly influenced by the content of the visual environment, as are head movements, though to a much lesser degree. Moreover, we found that the head and eyes are linked, with the head supporting, and by and large mirroring, the movements of the eyes, consistent with the notion that the head operates to support the acquisition of visual information by the eyes.
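The cross-recurrence analysis mentioned above compares two fixation sequences (e.g., one recorded during encoding, one during recognition) by counting how often fixations from the two sequences land close together in space. A minimal sketch of the core quantity, the cross-recurrence rate, is shown below; this is an illustrative reimplementation of the general technique, not the authors' actual analysis code, and the fixation coordinates and radius are invented for the example.

```python
# Illustrative sketch of cross-recurrence quantification for scanpaths
# (not the authors' code): the cross-recurrence rate is the fraction of
# fixation pairs (i, j) across two sequences that fall within a spatial
# radius of each other.
import math

def cross_recurrence_rate(fix_a, fix_b, radius):
    """Fraction of cross-sequence fixation pairs whose Euclidean
    distance is at most `radius` (same units as the coordinates)."""
    hits = 0
    for xa, ya in fix_a:
        for xb, yb in fix_b:
            if math.hypot(xa - xb, ya - yb) <= radius:
                hits += 1
    return hits / (len(fix_a) * len(fix_b))

# Toy fixation lists (e.g., gaze angles in degrees), purely hypothetical:
encoding = [(1.0, 2.0), (5.0, 5.0), (9.0, 1.0)]
recognition = [(1.2, 2.1), (5.1, 4.8), (2.0, 9.0)]
print(cross_recurrence_rate(encoding, recognition, radius=1.0))  # 2 of 9 pairs recur
```

A higher rate indicates that the two scanpaths revisited similar locations, which is how such analyses quantify the similarity of viewing behavior between encoding and recognition.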

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Eye Movements
  • Head Movements*
  • Humans
  • Photic Stimulation / methods
  • Recognition, Psychology
  • Virtual Reality*

Grants and funding

This research was supported by the Natural Sciences and Engineering Research Council of Canada (NCA: Postdoctoral Fellowship; WFB: RGPIN-2015-05560, AK: RGPIN-2016-04319). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.