Visualization and Semantic Labeling of Mood States Based on Time-Series Features of Eye Gaze and Facial Expressions by Unsupervised Learning

Healthcare (Basel). 2022 Aug 8;10(8):1493. doi: 10.3390/healthcare10081493.

Abstract

This study aims to develop a simple and reliable stress measurement and visualization system for stress management. We present a method for classifying and visualizing mood states based on unsupervised machine learning (ML) algorithms. The proposed method examines the relation between mood states and categories extracted from facial expressions, gaze distribution area and density, and rapid eye movements, defined as saccades, during human communication. Using a psychological check sheet and video-recorded communication with an interlocutor, an original benchmark dataset was collected from 20 subjects (10 male, 10 female) in their 20s at weekly intervals over four or eight weeks. We used the Profile of Mood States Second Edition (POMS2) psychological check sheet to extract total mood disturbance (TMD) and friendliness (F). These two indicators were classified into five categories using self-organizing maps (SOM) and the U-Matrix. The relation between gaze and facial expressions was then analyzed across the five extracted categories. Data from subjects in the positive categories showed a positive correlation with concentrated distributions of gaze and saccades. Regarding facial expressions, subjects in the positive categories showed a consistent expression time of intentional smiles, whereas subjects in the negative categories showed a time difference in intentional smiles. Moreover, three comparative experiments demonstrated that adding gaze and facial-expression features to TMD and F clarified the category boundaries obtained from the U-Matrix. We verify that SOM together with its two variants is the best combination for visualizing mood states.
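As an illustrative sketch only (not the authors' implementation), the SOM/U-Matrix classification step described above could be approximated in Python with the open-source MiniSom library; the feature layout, map size, and data below are hypothetical placeholders.

# Minimal sketch, assuming MiniSom and scikit-learn are available; the
# feature columns (TMD, F, gaze and facial-expression features) are assumptions.
import numpy as np
from minisom import MiniSom
from sklearn.cluster import KMeans

# Hypothetical feature matrix: one row per weekly session, columns =
# [TMD, F, gaze_area, gaze_density, saccade_rate, smile_time]
rng = np.random.default_rng(0)
X = rng.normal(size=(160, 6))                      # placeholder data
X = (X - X.mean(axis=0)) / X.std(axis=0)           # z-score normalization

# Train a 10x10 SOM and compute its U-Matrix (mean distance of each node
# to its neighbors), which visualizes the boundaries between categories.
som = MiniSom(10, 10, X.shape[1], sigma=1.5, learning_rate=0.5, random_seed=0)
som.random_weights_init(X)
som.train_random(X, 5000)
u_matrix = som.distance_map()

# Group the SOM codebook vectors into five categories (k-means over the
# node weights) and label each sample via its best-matching unit.
weights = som.get_weights().reshape(-1, X.shape[1])
node_category = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(weights)
labels = np.array([node_category[np.ravel_multi_index(som.winner(x), (10, 10))]
                   for x in X])

Plotting u_matrix as a heat map and overlaying the labeled best-matching units would reproduce, in spirit, the kind of category-boundary visualization the abstract refers to.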

Keywords: U-Matrix; facial expressions; human communication; mental health; saccades; self-organizing maps.

Grants and funding

This research received no external funding.