Sparse Simultaneous Recurrent Deep Learning for Robust Facial Expression Recognition

IEEE Trans Neural Netw Learn Syst. 2018 Oct;29(10):4905-4916. doi: 10.1109/TNNLS.2017.2776248. Epub 2018 Jan 5.

Abstract

Facial expression recognition is a challenging task that involves detecting and interpreting complex and subtle changes in facial muscles. Recent advances in feed-forward deep neural networks (DNNs) have improved object recognition performance, and sparse feature learning in feed-forward DNN models offers a further gain over earlier handcrafted techniques. However, the depth and computational complexity of feed-forward DNNs grow in proportion to the challenges posed by the facial expression recognition problem. Moreover, feed-forward DNN architectures do not exploit another important learning paradigm, recurrence, which is ubiquitous in the human visual system. Consequently, this paper proposes a novel, biologically relevant sparse-deep simultaneous recurrent network (S-DSRN) for robust facial expression recognition. Feature sparsity is obtained by adopting dropout learning in the proposed DSRN rather than by handcrafting additional penalty terms for a sparse representation of the data. Theoretical analysis of S-DSRN shows that dropout learning yields desirable properties such as sparsity and prevents the model from overfitting. Experimental results on two of the most widely used publicly available facial expression data sets also suggest that the proposed method achieves better accuracy, requires fewer parameters, and has lower computational complexity than previously reported state-of-the-art feed-forward DNNs. Furthermore, we show that combining the proposed architecture with a state-of-the-art metric learning technique significantly improves overall recognition performance. Finally, a graphics processing unit (GPU)-based implementation of S-DSRN is presented for real-time applications.
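The abstract's central idea, obtaining feature sparsity through dropout applied inside a simultaneous recurrent network rather than through an explicit sparsity penalty, can be illustrated with a minimal sketch. This is not the paper's implementation: the layer sizes, weight scales, number of relaxation steps `K`, and the function names `dropout` and `srn_forward` are all hypothetical choices made for illustration. A simultaneous recurrent network reapplies the same weights for several iterations so the hidden state relaxes toward a fixed point; zeroing activations with dropout at each iteration makes that state sparse without any handcrafted penalty term.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p, training=True):
    """Inverted dropout: zero each activation with probability p,
    rescaling survivors by 1/(1-p) so the expected value is unchanged."""
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def srn_forward(x, W_in, W_rec, K=5, p=0.5, training=True):
    """Toy simultaneous-recurrent relaxation: the SAME weights are
    applied for K iterations (hypothetical sizes, not the paper's)."""
    h = np.zeros(W_rec.shape[0])
    for _ in range(K):
        h = np.tanh(W_in @ x + W_rec @ h)
        h = dropout(h, p, training)  # sparsifies the hidden state
    return h

x = rng.standard_normal(16)                    # toy input feature vector
W_in = rng.standard_normal((32, 16)) * 0.1     # input-to-hidden weights
W_rec = rng.standard_normal((32, 32)) * 0.1    # shared recurrent weights
h = srn_forward(x, W_in, W_rec)
sparsity = float(np.mean(h == 0.0))            # fraction of exact zeros
```

With `p=0.5`, roughly half of the final hidden state is exactly zero, whereas running the same network with `training=False` produces a dense state; this is the sense in which dropout supplies sparsity "for free" compared with adding an L1-style penalty to the loss.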

Publication types

  • Research Support, U.S. Gov't, Non-P.H.S.