A Deep Learning Driven Simulation Analysis of the Emotional Profiles of Depression Based on Facial Expression Dynamics

Clin Psychopharmacol Neurosci. 2024 Feb 29;22(1):87-94. doi: 10.9758/cpn.23.1059. Epub 2023 Jun 8.

Abstract

Objective: Diagnosis and assessment of depression rely on questionnaire-based scoring systems, either self-reported by patients or administered by clinicians, and observation of patients' facial expressions during interviews plays a crucial role in forming clinical impressions. Deep learning-driven approaches can assist clinicians in diagnosing depression by recognizing subtle facial expressions and emotions in patients with depression.

Methods: Seventeen simulated patients who acted as depressed patients participated in this study. A trained psychiatrist conducted a structured interview with each participant twice, following a prepared scenario: once while the participant simulated moderate depression and once without depressive features. Interviews were video-recorded, and a facial emotion recognition algorithm was used to classify the emotion in each frame.
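A minimal sketch of this per-frame analysis is shown below, assuming OpenCV for video decoding. The abstract does not name the specific recognition algorithm, so classify_emotion is a hypothetical placeholder for whichever pretrained facial emotion recognition model is applied; emotion_profile simply aggregates per-frame labels into proportions per video.

```python
# Sketch of per-frame emotion classification of a recorded interview.
# classify_emotion() is a hypothetical stand-in for a pretrained facial
# emotion recognition model; it is not specified in the abstract.
from collections import Counter

import cv2  # OpenCV, used here only for reading video frames

EMOTIONS = ["anger", "disgust", "fear", "happiness", "neutral", "sadness", "surprise"]


def classify_emotion(frame) -> str:
    """Hypothetical wrapper: map one video frame to one of the seven emotion labels."""
    raise NotImplementedError("plug in a pretrained facial emotion recognition model")


def emotion_profile(video_path: str) -> dict:
    """Return the proportion of frames assigned to each emotion for one interview video."""
    cap = cv2.VideoCapture(video_path)
    counts = Counter()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        counts[classify_emotion(frame)] += 1
    cap.release()
    total = sum(counts.values()) or 1
    return {emotion: counts[emotion] / total for emotion in EMOTIONS}
```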

Results: Among the seven emotions (anger, disgust, fear, happiness, neutral, sadness, and surprise), sadness was expressed in a higher proportion on average in the depression-simulated group than in the normal group, whereas neutral and fear were expressed in higher proportions on average in the normal group than in the depression-simulated group. The overall distribution of emotions differed significantly between the two groups (p < 0.001), and the variance in emotion was significantly smaller in the depression-simulated group (p < 0.05).
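The abstract does not state which statistical tests were used. The sketch below illustrates one plausible way such a group comparison could be computed, assuming a chi-square test on pooled frame counts for the distribution difference and a Mann-Whitney U test on per-participant emotion variances; compare_groups and frames_per_video are hypothetical names introduced only for illustration.

```python
# Illustrative group comparison of per-video emotion profiles.
# The specific tests (chi-square, Mann-Whitney U) are assumptions,
# not the tests reported by the authors.
import numpy as np
from scipy import stats

EMOTIONS = ["anger", "disgust", "fear", "happiness", "neutral", "sadness", "surprise"]


def compare_groups(depr_profiles, normal_profiles, frames_per_video=1000):
    """depr_profiles / normal_profiles: lists of per-participant emotion-proportion dicts."""
    # Distribution difference: chi-square test on pooled frame counts per emotion.
    depr_counts = [sum(p[e] * frames_per_video for p in depr_profiles) for e in EMOTIONS]
    norm_counts = [sum(p[e] * frames_per_video for p in normal_profiles) for e in EMOTIONS]
    _, p_distribution, _, _ = stats.chi2_contingency([depr_counts, norm_counts])

    # Variance difference: per-participant variance of emotion proportions,
    # compared between groups with a Mann-Whitney U test.
    depr_var = [np.var([p[e] for e in EMOTIONS]) for p in depr_profiles]
    norm_var = [np.var([p[e] for e in EMOTIONS]) for p in normal_profiles]
    _, p_variance = stats.mannwhitneyu(depr_var, norm_var)

    return p_distribution, p_variance
```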

Conclusion: This study suggests a novel and practical deep learning-based approach to understanding the emotional expression of patients with depression. Further research would provide additional perspectives on the emotional profiles of clinical patients, potentially offering helpful insights for the diagnosis of depression.

Keywords: Deep learning; Depression; Diagnosis; Dynamic; Facial expression.

Grants and funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. 2021R1I1A1A01040317).