An Urdu speech corpus for emotion recognition

PeerJ Comput Sci. 2022 May 9;8:e954. doi: 10.7717/peerj-cs.954. eCollection 2022.

Abstract

Emotion recognition from acoustic signals plays a vital role in the field of audio and speech processing. Speech interfaces offer humans an informal and comfortable means to communicate with machines. Emotion recognition from speech signals has a variety of applications in the areas of human-computer interaction (HCI) and human behavior analysis. In this work, we develop the first emotional speech database of the Urdu language. We also develop a system to classify five different emotions: sadness, happiness, neutral, disgust, and anger, using different machine learning algorithms. Mel-frequency cepstral coefficients (MFCC), linear prediction coefficients (LPC), energy, spectral flux, spectral centroid, spectral roll-off, and zero-crossing rate were used as speech descriptors. The classification tests were performed on an emotional speech corpus collected from 20 different subjects. To evaluate the quality of the speech emotions, subjective listening tests were conducted. The rate of correctly classified emotions on the complete Urdu emotional speech corpus was 66.5% with the K-nearest neighbors classifier. The disgust emotion was found to have a lower recognition rate than the other emotions; removing it significantly improves the performance of the classifier, to 76.5%.
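As an illustration of the feature set and classifier named above, the sketch below extracts per-utterance descriptors (MFCC, LPC, energy, spectral flux, spectral centroid, spectral roll-off, zero-crossing rate) and trains a K-nearest neighbors classifier. This is not the authors' implementation; the abstract does not specify a toolchain, so the use of librosa and scikit-learn, the frame settings, the number of coefficients, the train/test split, and the value of k are all illustrative assumptions, as are the hypothetical wav_paths and labels inputs.

```python
# Hedged sketch: per-utterance acoustic descriptors + KNN classification.
# librosa/scikit-learn, parameter values, and file layout are assumptions,
# not details taken from the paper.
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

EMOTIONS = ["sadness", "happiness", "neutral", "disgust", "anger"]

def extract_features(path, sr=16000):
    """Return one fixed-length descriptor vector for a single utterance."""
    y, sr = librosa.load(path, sr=sr)

    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # MFCCs
    lpc = librosa.lpc(y, order=12)                            # LPC coefficients
    energy = librosa.feature.rms(y=y)                         # short-time energy
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # spectral centroid
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)    # spectral roll-off
    zcr = librosa.feature.zero_crossing_rate(y)               # zero-crossing rate

    # Spectral flux: frame-to-frame change of the magnitude spectrum.
    S = np.abs(librosa.stft(y))
    flux = np.sqrt(np.sum(np.diff(S, axis=1) ** 2, axis=0))

    # Summarize frame-level descriptors by their mean and standard deviation.
    def stats(x):
        x = np.atleast_2d(x)
        return np.concatenate([x.mean(axis=1), x.std(axis=1)])

    return np.concatenate(
        [stats(mfcc), lpc[1:], stats(energy), stats(centroid),
         stats(rolloff), stats(zcr), stats(flux)]
    )

def train_knn(wav_paths, labels, k=5):
    """wav_paths/labels are hypothetical lists describing the corpus."""
    X = np.vstack([extract_features(p) for p in wav_paths])
    y = np.array(labels)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    scaler = StandardScaler().fit(X_tr)
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(scaler.transform(X_tr), y_tr)
    acc = accuracy_score(y_te, clf.predict(scaler.transform(X_te)))
    return clf, scaler, acc
```

Dropping "disgust" from EMOTIONS and excluding the corresponding utterances before calling train_knn would correspond to the four-class setting reported in the abstract, in which accuracy improved once that emotion was removed.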

Keywords: Emotion recognition; Human behavior analysis; Human-computer interaction; Linear prediction coefficient (LPC); Machine learning algorithms; Mel frequency cepstral coefficient (MFCC); Speech descriptors; Urdu.

Grants and funding

The authors received no funding for this work.