Multisource Transfer Learning for Cross-Subject EEG Emotion Recognition

IEEE Trans Cybern. 2020 Jul;50(7):3281-3293. doi: 10.1109/TCYB.2019.2904052. Epub 2019 Mar 27.

Abstract

Electroencephalogram (EEG) signals are widely used in emotion recognition because of their high temporal resolution and reliability. However, individual differences in EEG are large, so emotion recognition models cannot be shared across subjects, and new labeled data must be collected to train a personal model for each new user. In many applications we want to obtain a model for a new subject as quickly as possible while reducing the amount of labeled data required. To this end, we propose a multisource transfer learning method in which existing subjects serve as sources and the new subject is the target. The target data are divided into calibration sessions for training and subsequent sessions for testing. The first stage of the method is source selection, which locates appropriate sources; the second is style transfer mapping, which reduces the EEG differences between the target and each source. Both stages use only a few labeled trials from the calibration sessions. Finally, we integrate the selected source models to recognize emotions in the subsequent sessions. On the benchmark SEED dataset, the method improves three-category classification accuracy by 12.72% compared with a nontransfer baseline. By reducing the reliance on labeled data, our method enables fast deployment of emotion recognition models for new users.
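The two-stage pipeline in the abstract can be sketched in miniature. This is an illustrative toy, not the paper's implementation: the simulated subjects, the nearest-centroid classifiers, and the least-squares affine mapping are all assumptions standing in for real EEG features and the authors' actual source-selection criterion and style transfer mapping.

```python
# Toy sketch of the abstract's pipeline: (1) source selection on a few
# labeled calibration trials, (2) a per-source affine "style transfer"
# mapping fitted by least squares, (3) majority-vote integration of the
# selected source models. All data and models here are simulated.
import numpy as np

rng = np.random.default_rng(0)
n_classes, dim = 3, 6

def centroids(X, y):
    # Nearest-centroid "model": one mean feature vector per emotion class.
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def predict(C, X):
    return ((X[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)

# Shared emotion structure with per-subject distortions (simulated).
mu = np.zeros((n_classes, dim))
mu[np.arange(n_classes), np.arange(n_classes)] = 5.0

def make_subject(shift_scale):
    y = np.repeat(np.arange(n_classes), 30)
    shift = rng.normal(scale=shift_scale, size=dim)  # subject-specific bias
    return mu[y] + shift + rng.normal(size=(len(y), dim)), y

# Source subjects: train one simple model per existing person.
source_models = []
for scale in (0.5, 0.5, 1.0, 3.0, 3.0):
    Xs, ys = make_subject(scale)
    source_models.append(centroids(Xs, ys))

# Target subject: affine-distorted features; only a few labeled
# calibration trials (8 per class) are assumed available.
A = np.linalg.qr(rng.normal(size=(dim, dim)))[0]   # random rotation
b = rng.normal(scale=2.0, size=dim)
Xt, yt = make_subject(0.0)
Xt = Xt @ A + b
cal = np.concatenate([np.where(yt == c)[0][:8] for c in range(n_classes)])
tst = np.setdiff1d(np.arange(len(yt)), cal)

# Stage 1: source selection -- keep the sources whose models best fit
# the calibration trials.
acc = [(predict(C, Xt[cal]) == yt[cal]).mean() for C in source_models]
selected = np.argsort(acc)[-3:]

# Stage 2: style transfer mapping -- per-source affine map fitted by
# least squares so calibration features land on that source's centroids.
def fit_stm(C):
    Xa = np.hstack([Xt[cal], np.ones((len(cal), 1))])
    B, *_ = np.linalg.lstsq(Xa, C[yt[cal]], rcond=None)
    return B

# Stage 3: integrate the selected source models by majority vote.
votes = []
for s in selected:
    B = fit_stm(source_models[s])
    Xmap = np.hstack([Xt[tst], np.ones((len(tst), 1))]) @ B
    votes.append(predict(source_models[s], Xmap))
pred = np.array([np.bincount(v, minlength=n_classes).argmax()
                 for v in np.array(votes).T])
accuracy = (pred == yt[tst]).mean()
print(f"ensemble accuracy on subsequent trials: {accuracy:.2f}")
```

In this toy setting the affine mapping undoes most of the simulated target distortion, so the ensemble scores well above the 1/3 chance level; the real method operates on EEG features and uses the authors' own selection and mapping procedures.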

MeSH terms

  • Adult
  • Brain / physiology
  • Brain-Computer Interfaces
  • Electroencephalography / classification*
  • Electroencephalography / methods*
  • Emotions / classification*
  • Humans
  • Machine Learning*
  • Pattern Recognition, Automated / methods
  • Young Adult