Emotion Recognition from EEG and Facial Expressions: a Multimodal Approach

Annu Int Conf IEEE Eng Med Biol Soc. 2018 Jul;2018:530-533. doi: 10.1109/EMBC.2018.8512407.

Abstract

Understanding a psychological phenomenon such as emotion is of paramount importance for psychologists, since it allows them to recognize a pathology and prescribe an appropriate treatment for a patient. In approaching this problem, mathematicians and computational scientists have proposed different unimodal techniques for emotion recognition from voice, electroencephalography (EEG), facial expression, and physiological data. It is also well known that identifying emotions is a multimodal process, and the main goal of this work is to train a computer to do the same. In this paper we present our first approach to multimodal emotion recognition via data fusion of EEG and facial expressions. The selected strategy was feature-level fusion of EEG and facial micro-expressions, and the classification schemes used were a neural network model and a random forest classifier. The experiments were carried out with the balanced multimodal database MAHNOB-HCI. Results are promising compared with those reported by other authors, reaching 97% accuracy. The feature-level fusion approach used in this work improves on our unimodal techniques by up to 12% per emotion. We therefore conclude that this simple but effective approach improves the overall accuracy.
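To make the feature-level fusion strategy concrete, the following is a minimal Python sketch: per-trial EEG and facial-expression feature vectors are concatenated into a single fused vector and fed to a random forest classifier. The feature shapes, class labels, and random data below are illustrative placeholders, not the paper's actual MAHNOB-HCI features or results.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Hypothetical per-trial feature matrices (the abstract does not specify the exact features):
    # eeg_features:  (n_trials, n_eeg_features), e.g. band-power values per EEG channel
    # face_features: (n_trials, n_face_features), e.g. micro-expression descriptors per trial
    rng = np.random.default_rng(0)
    n_trials = 200
    eeg_features = rng.normal(size=(n_trials, 160))
    face_features = rng.normal(size=(n_trials, 40))
    labels = rng.integers(0, 4, size=n_trials)  # placeholder emotion classes

    # Feature-level fusion: concatenate both modalities into one vector per trial
    fused = np.hstack([eeg_features, face_features])

    X_train, X_test, y_train, y_test = train_test_split(
        fused, labels, test_size=0.3, random_state=0, stratify=labels)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

The same fused matrix could equally be passed to a neural network model, the other classification scheme mentioned in the abstract; the fusion step itself is independent of the classifier choice.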

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Databases, Factual
  • Electroencephalography*
  • Emotions*
  • Facial Expression*
  • Humans