A Hybrid Deep Learning Emotion Classification System Using Multimodal Data

Sensors (Basel). 2023 Nov 22;23(23):9333. doi: 10.3390/s23239333.

Abstract

This paper proposes a hybrid deep learning emotion classification system (HDECS), a multimodal deep learning system designed for emotion classification in a specific national language. Emotion classification is important in diverse fields, including tailored corporate services and the advancement of AI. Moreover, most sentiment classification techniques for spoken interactions rely on a single modality: voice, conversational text, or physiological signals. However, analyzing these data presents challenges because of variations in vocal intonation, differences in text structure, and the impact of external stimuli on physiological signals. Korean poses additional challenges for natural language processing, including subject omission and spacing ambiguities. To overcome these challenges and enhance emotion classification performance, this paper presents a case study using Korean multimodal data. The case study model retrains two pretrained models, an LSTM and a CNN, until their predictions on the entire dataset reach an agreement rate exceeding 0.75. The agreed-upon predictions are then used to generate emotional sentences, which are appended to the script data and processed by BERT for the final emotion prediction. The results are evaluated using categorical cross-entropy (CCE), which measures the difference between the model's predictions and the actual labels, together with F1 score and accuracy. According to the evaluation, the case study model outperforms the existing KLUE/RoBERTa model with improvements of 0.5 in CCE, 0.09 in accuracy, and 0.11 in F1 score. Consequently, HDECS is expected to perform well not only on Korean multimodal datasets but also on sentiment classification that accounts for the speech characteristics of other languages and regions.
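The abstract describes the pipeline only at a high level. The following is a minimal sketch, not the authors' code, of the two steps it names: the dataset-wide agreement check between the two pretrained unimodal models and the generation of an emotional sentence appended to the script text before the BERT stage. The label set, the sentence template, and the assumption that both models expose scikit-learn-style fit()/predict() methods are all hypothetical; the abstract does not specify the retraining schedule.

```python
import numpy as np

# Hypothetical emotion label set; the paper's actual classes are not given in the abstract.
EMOTION_LABELS = ["joy", "sadness", "anger", "fear", "neutral"]


def agreement_rate(preds_a: np.ndarray, preds_b: np.ndarray) -> float:
    """Fraction of samples on which the two unimodal models predict the same class."""
    return float(np.mean(preds_a == preds_b))


def co_train_until_agreement(lstm_model, cnn_model, audio_x, bio_x,
                             threshold: float = 0.75, max_rounds: int = 10):
    """Retrain the two pretrained models until their predictions on the whole
    dataset agree on more than `threshold` of the samples.

    `lstm_model` and `cnn_model` are assumed to expose fit()/predict(); the
    retraining step below (refitting on the samples where the models already
    agree) is only one plausible scheme, not the one reported in the paper.
    """
    preds_lstm = lstm_model.predict(audio_x)
    preds_cnn = cnn_model.predict(bio_x)
    for _ in range(max_rounds):
        if agreement_rate(preds_lstm, preds_cnn) > threshold:
            break
        agree_mask = preds_lstm == preds_cnn
        lstm_model.fit(audio_x[agree_mask], preds_cnn[agree_mask])
        cnn_model.fit(bio_x[agree_mask], preds_lstm[agree_mask])
        preds_lstm = lstm_model.predict(audio_x)
        preds_cnn = cnn_model.predict(bio_x)
    return preds_lstm, preds_cnn


def append_emotion_sentence(script_text: str, label_id: int) -> str:
    """Append a templated 'emotional sentence' to an utterance before the BERT stage."""
    return f"{script_text} This utterance sounds like {EMOTION_LABELS[label_id]}."
```

In the system described by the paper, the augmented script text is then passed to a BERT-style classifier for the final emotion prediction, with KLUE/RoBERTa serving as the baseline for comparison; the template above is purely illustrative.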

Keywords: BERT; deep learning; emotion classification; multimodal.

MeSH terms

  • Communication
  • Deep Learning*
  • Emotions*
  • Entropy
  • Humans

Grants and funding

This research received no external funding.