Facial Emotion Recognition in Verbal Communication Based on Deep Learning

Sensors (Basel). 2022 Aug 16;22(16):6105. doi: 10.3390/s22166105.

Abstract

Facial emotion recognition from facial images is considered a challenging task due to the unpredictable nature of human facial expressions. The current literature on emotion classification has achieved high performance with deep learning (DL)-based models. However, these models suffer performance degradation due to the poor selection of layers in the convolutional neural network (CNN) model. To address this issue, we propose an efficient DL technique using a CNN model to classify emotions from facial images. The proposed algorithm is an improved network architecture of its kind, developed to process aggregated expressions produced by the Viola-Jones (VJ) face detector. The internal architecture of the proposed model was finalised after performing a set of experiments to determine the optimal model. The results of this work were generated through subjective and objective performance evaluations. An analysis of the results presented herein establishes the reliability of each type of emotion, along with its intensity and classification. The proposed model is benchmarked against state-of-the-art techniques and evaluated on the FER-2013, CK+, and KDEF datasets. The utility of these findings lies in their application by law-enforcement bodies in smart cities.
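The abstract describes a two-stage pipeline: a Viola-Jones detector localises the face, and a CNN classifies the cropped face into an emotion category. The paper does not specify the network's layers, so the following is only a minimal NumPy sketch of one such forward pass, assuming a 48x48 grayscale crop (the FER-2013 image size), a single convolution-ReLU-pooling stage, and the seven FER-2013 emotion classes; all weights and layer sizes are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

# Seven emotion classes used by FER-2013 (assumption: the paper's label set).
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a grayscale image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling; trims edges that don't fit a full window."""
    h, w = x.shape
    x = x[:h - h % size, :w - w % size]
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(face_crop, kernel, weights, bias):
    """Forward pass: conv -> ReLU -> max pool -> flatten -> softmax."""
    feat = np.maximum(conv2d(face_crop, kernel), 0.0)
    pooled = max_pool(feat).ravel()
    return softmax(pooled @ weights + bias)

rng = np.random.default_rng(0)
face = rng.random((48, 48))            # stand-in for a VJ-detected face crop
kernel = rng.standard_normal((3, 3))   # one illustrative 3x3 filter
pooled_dim = ((48 - 3 + 1) // 2) ** 2  # 23 * 23 = 529 pooled features
weights = rng.standard_normal((pooled_dim, len(EMOTIONS))) * 0.01
bias = np.zeros(len(EMOTIONS))

probs = classify(face, kernel, weights, bias)
print(EMOTIONS[int(np.argmax(probs))])
```

In practice the face crop would come from OpenCV's Haar-cascade implementation of Viola-Jones (`cv2.CascadeClassifier` with `haarcascade_frontalface_default.xml`), and the trained CNN would replace these random weights.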

Keywords: CNN; deep learning; facial expression recognition; law enforcement; smart cities; smart security; verbal communication.

MeSH terms

  • Deep Learning*
  • Emotions
  • Facial Expression
  • Facial Recognition*
  • Humans
  • Neural Networks, Computer
  • Reproducibility of Results

Grants and funding

This research received no external funding.