Exploring Prosodic Features Modelling for Secondary Emotions Needed for Empathetic Speech Synthesis

Sensors (Basel). 2023 Mar 10;23(6):2999. doi: 10.3390/s23062999.

Abstract

A low-resource emotional speech synthesis system for empathetic speech, based on modelling prosodic features, is presented here. The secondary emotions identified as necessary for empathetic speech are modelled and synthesised in this investigation. Because secondary emotions are subtle, they are more difficult to model than primary emotions, and they have not been studied extensively; this study is one of the few to model them in speech. Current speech synthesis research typically relies on large databases and deep learning techniques to develop emotion models, but since there are many secondary emotions, building a large database for each of them would be expensive. This research therefore presents a proof of concept that uses handcrafted feature extraction and a low-resource-intensive machine learning approach to model these features, thus creating synthetic speech with secondary emotions. A quantitative-model-based transformation is used to shape the fundamental frequency contour of the emotional speech, while speech rate and mean intensity are modelled via rule-based approaches. Using these models, an emotional text-to-speech synthesis system is developed to synthesise five secondary emotions: anxious, apologetic, confident, enthusiastic, and worried. A perception test evaluating the synthesised emotional speech is also conducted; participants could identify the correct emotion in a forced-response test with a hit rate greater than 65%.
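The quantitative model referred to above is the Fujisaki model (named in the keywords), which represents the log-F0 contour as a baseline plus superposed responses to phrase and accent commands. A minimal sketch of that superposition follows; the parameter values and command timings are illustrative assumptions, not figures from the paper:

```python
import math

def phrase_comp(t, alpha=2.0):
    # Gp(t): impulse response of the phrase control mechanism (0 for t <= 0)
    return alpha ** 2 * t * math.exp(-alpha * t) if t > 0 else 0.0

def accent_comp(t, beta=20.0, gamma=0.9):
    # Ga(t): step response of the accent control mechanism, clipped at gamma
    return min(1.0 - (1.0 + beta * t) * math.exp(-beta * t), gamma) if t > 0 else 0.0

def fujisaki_f0(t, fb, phrases, accents, alpha=2.0, beta=20.0):
    """F0(t) from ln F0(t) = ln Fb + sum of phrase and accent components.

    phrases: list of (onset_time, amplitude) phrase commands
    accents: list of (onset_time, offset_time, amplitude) accent commands
    """
    ln_f0 = math.log(fb)
    for t0, ap in phrases:
        ln_f0 += ap * phrase_comp(t - t0, alpha)
    for t1, t2, aa in accents:
        ln_f0 += aa * (accent_comp(t - t1, beta) - accent_comp(t - t2, beta))
    return math.exp(ln_f0)

# Illustrative contour: 120 Hz baseline, one phrase command, one accent command
contour = [fujisaki_f0(t / 100.0, 120.0, [(0.0, 0.5)], [(0.3, 0.6, 0.4)])
           for t in range(0, 100)]
```

Emotion-dependent F0 shaping in this framework amounts to adjusting the command amplitudes and timings (e.g. larger accent amplitudes for more animated emotions), which is what makes the approach attractive in a low-resource setting.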

Keywords: Fujisaki model; emotional speech synthesis; empathetic speech; fundamental frequency contour; low resource; secondary emotions.

MeSH terms

  • Anxiety
  • Emotions / physiology
  • Humans
  • Speech Perception* / physiology
  • Speech*

Grants and funding

This research was funded by the University of Auckland Postgraduate Research Student Support fund.