Fusion-ConvBERT: Parallel Convolution and BERT Fusion for Speech Emotion Recognition

Sensors (Basel). 2020 Nov 23;20(22):6688. doi: 10.3390/s20226688.

Abstract

Speech emotion recognition predicts the emotional state of a speaker from the person's speech. It adds an element that makes human-computer interaction more natural. Earlier studies of emotion recognition relied primarily on handcrafted features and manual labels. With the advent of deep learning, there have been efforts to apply deep-network-based approaches to the emotion recognition problem. Because deep learning automatically extracts salient features correlated with speaker emotion, it offers certain advantages over handcrafted-feature-based methods. Applying deep networks to emotion recognition is challenging, however, because the data required to train them properly are often lacking. There is therefore a need for a new deep-learning-based approach that exploits the information available in a given speech signal to the maximum extent possible. Our proposed method, called "Fusion-ConvBERT", is a parallel fusion model consisting of bidirectional encoder representations from transformers (BERT) and convolutional neural networks (CNNs). Extensive experiments were conducted on the proposed model using the EMO-DB and Interactive Emotional Dyadic Motion Capture (IEMOCAP) emotion corpora, and the proposed method outperformed state-of-the-art techniques in most test configurations.
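To make the parallel-fusion idea concrete, the following is a minimal PyTorch sketch of a CNN branch and a BERT-style transformer-encoder branch run in parallel over a log-mel spectrogram, with their utterance-level representations fused by concatenation before classification. The layer sizes, mean-pooling, concatenation-based fusion, and seven-class output are illustrative assumptions, not the paper's exact Fusion-ConvBERT configuration.

```python
import torch
import torch.nn as nn

class FusionConvBERTSketch(nn.Module):
    """Illustrative parallel CNN + transformer fusion (not the paper's exact model)."""

    def __init__(self, n_mels=64, d_model=128, n_heads=4, n_layers=2, n_emotions=7):
        super().__init__()
        # CNN branch: local time-frequency (spatial) patterns from the spectrogram.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (batch, 64, 1, 1)
        )
        # Transformer branch: bidirectional self-attention over spectrogram frames,
        # capturing long-range temporal context in the BERT style.
        self.frame_proj = nn.Linear(n_mels, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Late fusion: concatenate the two utterance-level representations.
        self.classifier = nn.Linear(64 + d_model, n_emotions)

    def forward(self, spec):
        # spec: (batch, frames, n_mels) log-mel spectrogram
        cnn_feat = self.cnn(spec.unsqueeze(1)).flatten(1)  # (batch, 64)
        enc = self.encoder(self.frame_proj(spec))          # (batch, frames, d_model)
        bert_feat = enc.mean(dim=1)                        # mean-pool over frames
        return self.classifier(torch.cat([cnn_feat, bert_feat], dim=1))

# Example: a batch of two 3-second utterances (~300 frames at a 10 ms hop).
logits = FusionConvBERTSketch()(torch.randn(2, 300, 64))
print(logits.shape)  # torch.Size([2, 7])
```

The seven output classes match the EMO-DB label set; concatenation is one common late-fusion choice, and other fusion strategies (e.g., attention-weighted combination) fit the same two-branch skeleton.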

Keywords: bidirectional encoder representations from transformers (BERT); convolutional neural networks (CNNs); fusion model; representation; spatiotemporal representation; speech emotion recognition; transformer.

MeSH terms

  • Emotions*
  • Humans
  • Neural Networks, Computer*
  • Speech*