Impact of Activation, Optimization, and Regularization Methods on the Facial Expression Model Using CNN

Comput Intell Neurosci. 2022 Jun 16:2022:3098604. doi: 10.1155/2022/3098604. eCollection 2022.

Abstract

Facial expressions are an effective means of conveying sentiments and thoughts. The capacity to recognize emotional states from facial expressions is essential for successful human-computer collaboration, data-driven animation, and human-robot communication. Recent studies show that deep learning has become increasingly popular for image classification. As a result, substantial effort has been devoted in recent years to solving the problem of facial expression recognition (FER) with convolutional neural networks (CNNs). This study presents a FER technique that is trained on facial expression images from databases such as CK+ and JAFFE and is examined with respect to activation, optimization, and regularization parameters. The model recognizes seven emotions: happiness, sadness, surprise, fear, anger, disgust, and neutrality. Its performance was evaluated across a range of activation functions, optimizers, regularization settings, and other hyperparameters, as detailed in this study. In experiments, the FER technique recognized emotions effectively when configured with the Adam optimizer, the Softmax activation, and a dropout ratio of 0.1 to 0.2, in combination with the other techniques examined. It also outperforms existing FER techniques that rely on handcrafted features and a single channel, and it delivers superior network performance compared with current state-of-the-art methods.
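To make the reported hyperparameter choices concrete, the sketch below shows a minimal Keras CNN for seven-class FER configured with the Adam optimizer, a Softmax output layer, and a dropout ratio in the 0.1 to 0.2 range, as highlighted in the abstract. The input size (48x48 grayscale crops), layer widths, and learning rate are illustrative assumptions, not the authors' exact architecture.

```
# Minimal sketch (assumed architecture, not the paper's exact model):
# a small CNN for seven-class facial expression recognition using the
# hyperparameters highlighted in the abstract -- Adam, Softmax, dropout 0.1-0.2.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # happiness, sadness, surprise, fear, anger, disgust, neutral

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),                  # assumed grayscale face crops
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.2),                              # dropout ratio of 0.1-0.2 per the abstract
    layers.Dense(NUM_CLASSES, activation="softmax"),  # Softmax output over the 7 emotions
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # Adam, as reported; rate assumed
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```

Training on CK+ or JAFFE would then amount to calling model.fit on preprocessed face crops and integer emotion labels; the preprocessing pipeline is not specified in the abstract.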

Publication types

  • Retracted Publication

MeSH terms

  • Anger
  • Emotions / physiology
  • Facial Expression*
  • Facial Recognition*
  • Humans
  • Neural Networks, Computer