Emotional Speech Recognition Using Deep Neural Networks

Sensors (Basel). 2022 Feb 12;22(4):1414. doi: 10.3390/s22041414.

Abstract

The expression of emotion plays an important role in human communication, enriching the information conveyed to a partner. Humans express emotion in many forms: body language, facial expressions, eye contact, laughter, and tone of voice. Although the world's languages differ, even without understanding the language being spoken, people can often grasp part of a speaker's message from such emotional cues. Among these forms of expression, emotion conveyed through the voice is perhaps the most widely studied. This article presents our research on speech emotion recognition using deep neural networks, namely CNN, CRNN, and GRU models. We used the Interactive Emotional Dyadic Motion Capture (IEMOCAP) corpus, with four emotion classes: anger, happiness, sadness, and neutral. The recognition features include the Mel spectral coefficients together with other parameters related to the spectrum and intensity of the speech signal. Data augmentation was performed by voice modification and by adding white noise. The results show that the GRU model achieved the highest average recognition accuracy, 97.47%, surpassing existing studies on speech emotion recognition with the IEMOCAP corpus.
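
The white-noise augmentation mentioned above is commonly implemented by scaling Gaussian noise to a target signal-to-noise ratio before mixing it into the waveform. The following is a minimal sketch of that idea; the function name, the 20 dB SNR target, and the synthetic test tone are illustrative assumptions, not details taken from the paper.

```python
import math
import random

def add_white_noise(signal, snr_db, rng=None):
    """Mix white Gaussian noise into a waveform at a target SNR (dB).

    `signal` is a list of float samples; a higher `snr_db` means the
    augmented copy stays closer to the clean signal.
    """
    rng = rng or random.Random(0)
    # Average power of the clean signal
    power = sum(s * s for s in signal) / len(signal)
    # Noise power required to reach the requested SNR
    noise_power = power / (10 ** (snr_db / 10))
    sigma = math.sqrt(noise_power)
    return [s + rng.gauss(0.0, sigma) for s in signal]

# Example (assumed parameters): a 440 Hz tone at 16 kHz, augmented at 20 dB SNR
sr = 16000
tone = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr)]
noisy = add_white_noise(tone, snr_db=20)
```

Each pass with a fresh random seed yields a new training example, which is how noise augmentation multiplies the effective size of a corpus such as IEMOCAP.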

Keywords: CNN; CRNN; GRU; IEMOCAP; data augmentation; emotion; recognition; speech.

MeSH terms

  • Emotions
  • Facial Expression
  • Humans
  • Neural Networks, Computer
  • Speech
  • Speech Perception*
  • Voice*