Predicting the Valence of a Scene from Observers' Eye Movements

PLoS One. 2015 Sep 25;10(9):e0138198. doi: 10.1371/journal.pone.0138198. eCollection 2015.

Abstract

Multimedia analysis benefits from understanding the emotional content of a scene in a variety of tasks such as video genre classification and content-based image retrieval. Recently, there has been increasing interest in applying human bio-signals, particularly eye movements, to recognize the emotional gist of a scene, such as its valence. To determine the emotional category of images from eye movements, existing methods typically learn a classifier over several features extracted from the eye movements. Although eye movements have been shown to be potentially useful for recognizing scene valence, the contribution of each individual feature is not well studied. To address this issue, we study the contribution of features extracted from eye movements to the classification of images into pleasant, neutral, and unpleasant categories. We assess ten features and their fusion. The features are histogram of saccade orientation, histogram of saccade slope, histogram of saccade length, histogram of saccade duration, histogram of saccade velocity, histogram of fixation duration, fixation histogram, top-ten salient coordinates, and saliency map. We take a machine learning approach, analyzing the performance of the features by training a support vector machine and exploiting various feature fusion schemes. The experiments reveal that 'saliency map', 'fixation histogram', 'histogram of fixation duration', and 'histogram of saccade slope' are the most informative features. These features signify the influence of fixation information and of the angular behavior of eye movements in recognizing the valence of images.
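To make the histogram-feature-plus-SVM pipeline concrete, the following is a minimal sketch, not the authors' implementation. The bin counts, the grid size, the definition of saccade slope (taken here as the angle of the displacement vector about the horizontal), and the synthetic data are all illustrative assumptions; the saliency-map and top-ten-salient-coordinate features and the alternative fusion schemes are omitted for brevity.

    # Sketch: histogram features from saccades/fixations, early fusion, SVM.
    # Assumptions: saccades given as (dx, dy, duration_s) rows, fixations as
    # (x, y, duration_s) rows with x, y normalized to [0, 1].
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    def saccade_features(sacc, n_bins=8):
        """Histograms of saccade orientation, slope, length, duration, velocity."""
        dx, dy, dur = sacc[:, 0], sacc[:, 1], sacc[:, 2]
        length = np.hypot(dx, dy)
        orientation = np.arctan2(dy, dx)            # signed angle in [-pi, pi]
        slope = np.arctan2(dy, np.abs(dx))          # assumed slope definition
        velocity = length / np.maximum(dur, 1e-6)
        feats = []
        for v, rng in [(orientation, (-np.pi, np.pi)),
                       (slope, (-np.pi / 2, np.pi / 2)),
                       (length, (0, length.max() + 1e-6)),
                       (dur, (0, dur.max() + 1e-6)),
                       (velocity, (0, velocity.max() + 1e-6))]:
            h, _ = np.histogram(v, bins=n_bins, range=rng, density=True)
            feats.append(h)
        return np.concatenate(feats)

    def fixation_features(fix, grid=(4, 4), n_bins=8):
        """Histogram of fixation duration plus a spatial fixation histogram."""
        dur_hist, _ = np.histogram(fix[:, 2], bins=n_bins,
                                   range=(0, fix[:, 2].max() + 1e-6), density=True)
        spatial, _, _ = np.histogram2d(fix[:, 0], fix[:, 1], bins=grid,
                                       range=[[0, 1], [0, 1]])
        spatial = spatial.ravel() / max(spatial.sum(), 1)
        return np.concatenate([dur_hist, spatial])

    def image_features(sacc, fix):
        # Early fusion: concatenate all histograms into one vector per image.
        return np.concatenate([saccade_features(sacc), fixation_features(fix)])

    # Synthetic stand-in data: 60 "images", each with 30 saccades and 40 fixations.
    rng = np.random.default_rng(0)
    X = np.stack([image_features(
            rng.normal(size=(30, 3)) * [5, 5, 0.05] + [0, 0, 0.06],
            np.column_stack([rng.random((40, 2)), rng.random(40) * 0.5]))
        for _ in range(60)])
    y = rng.integers(0, 3, size=60)   # 0: unpleasant, 1: neutral, 2: pleasant

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, y)
    print(clf.predict(X[:5]))

Concatenating the histograms before training corresponds to early fusion; a late-fusion variant would instead train one SVM per feature and combine their decisions, which is one of the schemes such a comparison would exploit.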

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Attention / physiology
  • Emotions / physiology*
  • Eye Movements / physiology*
  • Female
  • Fixation, Ocular / physiology
  • Humans
  • Male
  • Observer Variation
  • Photic Stimulation
  • Recognition, Psychology* / physiology
  • User-Computer Interface
  • Visual Perception / physiology*

Associated data

  • figshare/10.6084/M9.FIGSHARE.1533078

Grants and funding

This work was partially supported by Nokia Scholarships. This does not alter the authors’ adherence to all the PLOS ONE policies on sharing data and materials.