Towards Context-Aware Facial Emotion Reaction Database for Dyadic Interaction Settings

Sensors (Basel). 2023 Jan 1;23(1):458. doi: 10.3390/s23010458.

Abstract

Emotion recognition is a significant issue in many sectors that use human emotional reactions as a form of communication, such as marketing, technological equipment, and human-robot interaction. The realistic facial behavior of social robots and artificial agents remains a challenge, limiting their emotional credibility in dyadic face-to-face situations with humans. One obstacle is the lack of appropriate training data on how humans typically interact in such settings. This article focuses on the collection of facial behavior from 60 participants to create a new type of dyadic emotion reaction database. For this purpose, we propose a methodology that automatically captures participants' facial expressions via webcam while they are engaged with other people (through facial videos) in emotionally primed contexts. The data were then analyzed with three different Facial Expression Analysis (FEA) tools: iMotions, the Mini-Xception model, and the Py-Feat FEA toolkit. Although the emotion reactions were reported as genuine, the comparative analysis showed that the three models did not agree on a single emotion reaction prediction. This result indicates that a more robust and effective model for emotion reaction prediction is needed. The relevance of this work for human-computer interaction studies lies in its novel approach to developing adaptive behaviors for synthetic human-like beings (virtual or robotic), allowing them to simulate human facial interaction behavior in contextually varying dyadic situations with humans. This article should be useful to researchers in human emotion analysis when deciding on a suitable methodology for collecting facial expression reactions in a dyadic setting.
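
The abstract names Py-Feat as one of the three FEA tools used in the comparison. As an illustration only, the following minimal sketch shows how frame-level emotion scores could be extracted from a single participant's webcam recording with Py-Feat's Detector; the video file name, the frame-skipping value, and the use of the default model backends are assumptions for the example, not details reported in the article.

    # Minimal sketch (not the authors' pipeline): frame-level emotion
    # probabilities from a webcam recording using the Py-Feat toolkit.
    # "participant_reaction.mp4" and skip_frames=30 are hypothetical.
    from feat import Detector

    # Default face, landmark, AU, and emotion models; the article does not
    # specify which Py-Feat backends were configured.
    detector = Detector()

    # Detect faces and classify emotions, sampling every 30th frame
    # to keep runtime manageable.
    prediction = detector.detect_video("participant_reaction.mp4", skip_frames=30)

    # DataFrame of per-frame probabilities for the basic emotion categories.
    emotion_scores = prediction.emotions
    print(emotion_scores.head())

    # Dominant predicted emotion per sampled frame, tallied across the clip.
    print(emotion_scores.idxmax(axis=1).value_counts())

Comparable per-frame outputs from iMotions and the Mini-Xception model could then be aligned on the same timeline to reproduce the kind of cross-tool comparison the abstract describes.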

Keywords: data collection; emotion reaction; emotion recognition; facial expression analysis; responsible AI.

MeSH terms

  • Awareness
  • Communication
  • Emotions*
  • Facial Expression
  • Humans
  • Interpersonal Relations*
  • Recognition, Psychology

Grants and funding

The work was supported by the EU Mobilitas Pluss grant (MOBTT90) of Pia Tikka, Enactive Virtuality Lab, Tallinn University. This work was also partially supported by the Estonian Centre of Excellence in IT (EXCITE), funded by the European Regional Development Fund.