Speech Emotion Recognition Using Convolution Neural Networks and Multi-Head Convolutional Transformer

Sensors (Basel). 2023 Jul 7;23(13):6212. doi: 10.3390/s23136212.

Abstract

Speech emotion recognition (SER) is a challenging task in human-computer interaction (HCI) systems. One of the key challenges in SER is to extract emotional features effectively from a speech utterance. Although recent studies have reported promising results, they generally do not leverage advanced fusion algorithms to generate effective representations of emotional features in speech utterances. To address this problem, we describe the fusion of spatial and temporal feature representations of speech emotion by parallelizing convolutional neural networks (CNNs) and a Transformer encoder for SER. We stack two CNNs for spatial feature representation in parallel with a Transformer encoder for temporal feature representation, simultaneously expanding the filter depth and reducing the feature map to obtain an expressive hierarchical feature representation at a lower computational cost. We use the RAVDESS dataset to recognize eight different speech emotions. To minimize model overfitting, we augment the dataset with Additive White Gaussian Noise (AWGN), intensifying its variations. With the fused spatial and temporal feature representations from the CNNs and the Transformer encoder, the SER model achieves 82.31% accuracy for eight emotions on a hold-out dataset. In addition, the SER system is evaluated on the IEMOCAP dataset and achieves 79.42% recognition accuracy for five emotions. Experimental results on the RAVDESS and IEMOCAP datasets show the success of the presented SER system and demonstrate an absolute performance improvement over state-of-the-art (SOTA) models.
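For illustration, the sketch below shows one way the two techniques the abstract describes, AWGN augmentation and a parallel CNN/Transformer-encoder fusion model, might be assembled in PyTorch. This is a minimal sketch assuming log-mel spectrogram inputs; the branch widths, layer counts, SNR value, and the add_awgn helper are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn


def add_awgn(wave: torch.Tensor, snr_db: float = 15.0) -> torch.Tensor:
    """Augment a waveform with additive white Gaussian noise at a target SNR.

    The 15 dB default is an illustrative assumption, not the paper's setting.
    """
    signal_power = wave.pow(2).mean()
    noise_power = signal_power / (10 ** (snr_db / 10))
    return wave + torch.randn_like(wave) * noise_power.sqrt()


def cnn_branch(out_channels: int) -> nn.Sequential:
    """One CNN branch: filter depth expands while pooling shrinks the map."""
    return nn.Sequential(
        nn.Conv2d(1, out_channels // 2, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_channels // 2), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(out_channels // 2, out_channels, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_channels), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),  # global spatial summary -> (B, C, 1, 1)
    )


class ParallelCNNTransformerSER(nn.Module):
    """Two parallel CNN branches (spatial) plus a Transformer encoder
    (temporal); the three representations are concatenated for
    classification. Sizes below are illustrative assumptions."""

    def __init__(self, n_mels: int = 64, d_model: int = 64,
                 n_heads: int = 4, num_classes: int = 8):
        super().__init__()
        self.branch_a = cnn_branch(32)
        self.branch_b = cnn_branch(64)
        # Multi-head self-attention over time frames for temporal features.
        self.proj = nn.Linear(n_mels, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Fusion head: concatenated spatial + temporal representations.
        self.classifier = nn.Linear(32 + 64 + d_model, num_classes)

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, 1, n_mels, time) log-mel spectrogram
        a = self.branch_a(spec).flatten(1)           # (B, 32)
        b = self.branch_b(spec).flatten(1)           # (B, 64)
        frames = spec.squeeze(1).transpose(1, 2)     # (B, time, n_mels)
        t = self.encoder(self.proj(frames)).mean(1)  # (B, d_model)
        return self.classifier(torch.cat([a, b, t], dim=1))


model = ParallelCNNTransformerSER()
logits = model(torch.randn(4, 1, 64, 200))  # 4 clips, 64 mel bins, 200 frames
print(logits.shape)                          # torch.Size([4, 8])
```

Global average pooling in each CNN branch and mean pooling over the encoder's time axis keep the fused vector small, which matches the abstract's emphasis on reducing the feature map at a lower computational cost.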

Keywords: convolutional Transformer encoder; convolutional neural networks; multi-head attention; spatial features; speech emotion recognition; temporal features.

MeSH terms

  • Algorithms
  • Computer Systems
  • Emotions
  • Humans
  • Neural Networks, Computer*
  • Speech*

Grants and funding

This research project is supported by the Second Century Fund (C2F), Chulalongkorn University. Mohammad Alibakhshikenari acknowledges the support from the CONEXPlus programme funded by Universidad Carlos III de Madrid and the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 801538. The authors also sincerely appreciate funding from Researchers Supporting Project number (RSPD2023R699), King Saud University, Riyadh, Saudi Arabia.