A Novel Maximum Entropy Markov Model for Human Facial Expression Recognition

PLoS One. 2016 Sep 16;11(9):e0162702. doi: 10.1371/journal.pone.0162702. eCollection 2016.

Abstract

Research in video-based facial expression recognition (FER) systems has exploded in the past decade. However, most previous methods work well only when they are trained and tested on the same dataset. Illumination settings, image resolution, camera angle, and the physical characteristics of the subjects differ from one dataset to another, and training and testing on a single dataset keeps the variance arising from these differences to a minimum. A robust FER system that works across several datasets is thus highly desirable. The aim of this work is to design, implement, and validate such a system using different datasets. In this regard, the major contribution is made at the recognition module, which uses the maximum entropy Markov model (MEMM) for expression recognition. In this model, the states of the human expressions are modeled as the states of an MEMM, with the video-sensor observations serving as the observations of the MEMM. A modified Viterbi algorithm is utilized to generate the most probable expression state sequence based on these observations. Lastly, an algorithm is designed that predicts the expression from the generated state sequence. Performance is compared against several existing state-of-the-art FER systems on six publicly available datasets. A weighted average accuracy of 97% is achieved across all datasets.
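The paper does not publish its implementation, but the decoding step it describes can be illustrated with a minimal sketch. In an MEMM, unlike an HMM, the transition and emission scores are merged into a single learned conditional P(s_t | s_{t-1}, o_t), so Viterbi decoding maximizes a product of these conditionals. The sketch below is generic: the expression labels, the toy observation symbols, and the `cond_prob`/`init_prob` functions in the usage example are illustrative assumptions, not the paper's actual feature model.

```python
import math

def memm_viterbi(states, observations, cond_prob, init_prob):
    """Viterbi decoding for a maximum entropy Markov model (MEMM).

    cond_prob(prev_state, obs, state) -> P(state | prev_state, obs)
    init_prob(obs, state)             -> P(state | obs) at t = 0

    In an MEMM these conditionals replace the separate transition and
    emission tables of an HMM.
    """
    # Log-probability of the best path ending in each state at t = 0.
    V = [{s: math.log(init_prob(observations[0], s)) for s in states}]
    back = [{}]
    for t in range(1, len(observations)):
        V.append({})
        back.append({})
        for s in states:
            # Best predecessor for state s given the current observation.
            best_prev, best_score = max(
                ((p, V[t - 1][p] + math.log(cond_prob(p, observations[t], s)))
                 for p in states),
                key=lambda x: x[1],
            )
            V[t][s] = best_score
            back[t][s] = best_prev
    # Backtrack from the most probable final state.
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

# Illustrative toy model: two expression states and two observation
# symbols, with hand-set probabilities (purely hypothetical).
def toy_init(obs, s):
    return 0.9 if (obs == "smile") == (s == "happy") else 0.1

def toy_cond(prev, obs, s):
    return toy_init(obs, s)  # here the previous state is ignored

print(memm_viterbi(["neutral", "happy"],
                   ["flat", "smile", "smile"],
                   toy_cond, toy_init))
# -> ['neutral', 'happy', 'happy']
```

The paper's "modified Viterbi" and the subsequent state-sequence-to-expression predictor are not specified in the abstract, so this sketch shows only the standard decoding core on which such modifications would build.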

Publication types

  • Validation Study

MeSH terms

  • Entropy*
  • Facial Expression*
  • Facial Recognition*
  • Humans
  • Markov Chains*
  • Models, Theoretical*

Grants and funding

This research was supported by the MSIP, Korea, under the G-ITRC support program (IITP-2015-R6812-15-0001) supervised by the IITP, and by the Priority Research Centers Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2010-0020210).