Distinguishing Emotional Responses to Photographs and Artwork Using a Deep Learning-Based Approach

Sensors (Basel). 2019 Dec 14;19(24):5533. doi: 10.3390/s19245533.

Abstract

Visual stimuli from photographs and artworks elicit corresponding emotional responses, but establishing whether the emotions they evoke actually differ is a long process. We address this question using electroencephalogram (EEG)-based biosignals and a deep convolutional neural network (CNN)-based emotion recognition model. We employ Russell's emotion model, which maps emotion keywords such as happy, calm, or sad onto a coordinate system whose axes are valence and arousal. We collected photographs and artwork images matching the emotion keywords and built eighteen one-minute video clips, one per emotion keyword for each of the nine keywords, for photographs and for artwork. We recruited forty subjects and measured their emotional responses to the video clips. A t-test on the results showed a significant difference in valence, but not in arousal.
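
The comparison described above can be sketched as follows: emotion keywords are placed on Russell's valence-arousal plane, and per-subject valence ratings for the two stimulus categories are compared with a two-sample t-test. This is a minimal illustration only; the coordinates, the ratings, and the use of Welch's t statistic are assumptions for demonstration, not values or methods taken from the paper.

```python
import math
import statistics

# Hypothetical valence-arousal coordinates for a few of Russell's emotion
# keywords (axis values in [-1, 1]; illustrative only, not from the paper).
RUSSELL_COORDS = {
    "happy": (0.8, 0.5),   # (valence, arousal)
    "calm":  (0.6, -0.6),
    "sad":   (-0.7, -0.4),
}

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances assumed)."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))

# Hypothetical per-subject valence ratings for the same emotion keyword,
# one group viewing photographs and one viewing artworks.
photo_valence = [0.72, 0.65, 0.80, 0.58, 0.70]
art_valence = [0.55, 0.48, 0.60, 0.42, 0.50]

t_stat = welch_t(photo_valence, art_valence)
print(f"Welch t statistic for valence: {t_stat:.3f}")
```

A large positive statistic here would indicate that the photograph group's valence ratings exceed the artwork group's, mirroring the direction of comparison the abstract reports for valence.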

Keywords: CNN; DEAP dataset; Russell model; artwork; emotion recognition.

MeSH terms

  • Deep Learning*
  • Electroencephalography
  • Emotions / physiology*
  • Humans
  • Models, Theoretical
  • Neural Networks, Computer