Applying Self-Supervised Representation Learning for Emotion Recognition Using Physiological Signals

Sensors (Basel). 2022 Nov 23;22(23):9102. doi: 10.3390/s22239102.

Abstract

The use of machine learning (ML) techniques in affective computing applications focuses on improving the user experience in emotion recognition. The collection of input data (e.g., physiological signals), together with expert annotations, is part of the established supervised learning methodology used to train human emotion recognition models. However, these models generally require large amounts of labeled data, which is expensive and impractical to obtain in the healthcare context, where data annotation demands even more expert knowledge. To address this problem, this paper explores the use of the self-supervised learning (SSL) paradigm in the development of emotion recognition methods. This approach makes it possible to learn representations directly from unlabeled signals and subsequently use them to classify affective states. This paper presents the key concepts of emotions and how SSL methods can be applied to recognize affective states. We experimentally analyze and compare self-supervised and fully supervised training of a convolutional neural network designed to recognize emotions. The experimental results on three emotion datasets demonstrate that self-supervised representations capture broadly useful features that improve data efficiency, transfer well across datasets, are competitive with their fully supervised counterparts, and require no labeled data for learning.
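The core idea summarized above can be illustrated with a minimal sketch of a common SSL setup for physiological signals: a transformation-recognition pretext task, in which the model learns representations by predicting which transformation was applied to an unlabeled signal window. The signal shapes, transformation set, and variable names below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unlabeled physiological signals: 100 windows of 256 samples.
signals = rng.standard_normal((100, 256))

# Assumed pretext task: apply a signal transformation and let its
# identity serve as a free (self-generated) label.
def transform(x, kind):
    if kind == 0:                              # identity
        return x
    if kind == 1:                              # additive Gaussian noise
        return x + 0.1 * rng.standard_normal(x.shape)
    if kind == 2:                              # amplitude scaling
        return 1.5 * x
    return x[::-1]                             # kind == 3: time reversal

pretext_x, pretext_y = [], []
for x in signals:
    for kind in range(4):
        pretext_x.append(transform(x, kind))
        pretext_y.append(kind)                 # label comes from the data itself

pretext_x = np.stack(pretext_x)
pretext_y = np.asarray(pretext_y)

# A network trained to predict pretext_y from pretext_x learns
# representations that can later be fine-tuned on (scarce) emotion labels.
print(pretext_x.shape, pretext_y.shape)        # (400, 256) (400,)
```

No expert annotation is needed at any point in this pretext stage; the labeled emotion data is only required later, for the downstream classifier, which is what makes the approach attractive when annotations are expensive.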

Keywords: emotion recognition; physiological signals; representation learning; self-supervised learning; wearable sensors.

MeSH terms

  • Algorithms*
  • Emotions / physiology
  • Humans
  • Machine Learning
  • Neural Networks, Computer*
  • Recognition, Psychology

Grants and funding

This research received no external funding.