Spatial-frequency-temporal convolutional recurrent network for olfactory-enhanced EEG emotion recognition

J Neurosci Methods. 2022 Jul 1;376:109624. doi: 10.1016/j.jneumeth.2022.109624. Epub 2022 May 16.

Abstract

Background: Multimedia stimuli play an important role in inducing emotions by modulating brain activity. Emotion recognition from EEG signals, which directly reflect brain activity, has become an active research topic in affective computing.

New method: In this paper, we develop a novel odor-video elicited physiological signal database (OVPD), in which we collect EEG signals from eight participants in positive, neutral and negative emotional states while they are stimulated by traditional video content synchronized with odors. To make full use of EEG features from different domains, we design a 3DCNN-BiLSTM model that combines a three-dimensional convolutional neural network (CNN) with a bidirectional long short-term memory (BiLSTM) network for EEG emotion recognition. First, we transform the EEG signals into 4D representations that retain spatial, frequency and temporal information. These representations are then fed into the 3DCNN-BiLSTM model to recognize emotions: the CNN learns spatial and frequency information from the 4D representations, and the BiLSTM extracts forward and backward temporal dependencies.
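
The abstract does not report layer sizes, so the following PyTorch sketch is only illustrative: the 9x9 electrode grid, 5 frequency bands, 6 time segments, and all channel widths are assumptions, not the paper's exact architecture. It shows the general pattern described above: 3D convolutions over each time segment's spatial-frequency volume, followed by a BiLSTM across the resulting feature sequence.

```python
import torch
import torch.nn as nn

class CNN3D_BiLSTM(nn.Module):
    """Illustrative 3D-CNN + BiLSTM classifier for 4D EEG features.

    Input: (batch, T, bands, H, W), where T is the number of time
    segments, bands the number of frequency bands, and H x W the 2D
    electrode grid. All sizes here are assumptions for the sketch.
    """
    def __init__(self, n_classes=3, hidden=64):
        super().__init__()
        # 3D convolutions over the (bands, H, W) volume of one segment;
        # the singleton input channel lets Conv3d treat bands as depth.
        self.cnn = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((1, 4, 4)),  # collapse band axis, pool space
            nn.Flatten(),                     # -> 32 * 1 * 4 * 4 = 512 features
        )
        # Bidirectional LSTM over the per-segment feature sequence captures
        # forward and backward temporal dependencies.
        self.bilstm = nn.LSTM(input_size=512, hidden_size=hidden,
                              batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                    # x: (B, T, bands, H, W)
        b, t = x.shape[:2]
        x = x.flatten(0, 1).unsqueeze(1)     # (B*T, 1, bands, H, W)
        feats = self.cnn(x).view(b, t, -1)   # (B, T, 512)
        out, _ = self.bilstm(feats)          # (B, T, 2*hidden)
        return self.fc(out[:, -1])           # classify from last time step

model = CNN3D_BiLSTM()
dummy = torch.randn(2, 6, 5, 9, 9)           # 2 fake samples: 6 segments, 5 bands, 9x9 grid
print(model(dummy).shape)                    # torch.Size([2, 3])
```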

Results: We run 5-fold cross-validation five times on the OVPD dataset to evaluate the model. In the three-class classification of positive, neutral and negative emotions, the proposed model achieves an average accuracy of 98.29% (standard deviation 0.72%) under olfactory-enhanced video stimuli and 98.03% (standard deviation 0.73%) under traditional video stimuli. To verify the generalizability of the proposed model, we also evaluate the approach on the public SEED EEG emotion dataset.
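
As a minimal sketch of this evaluation protocol (5-fold cross-validation repeated five times, reporting mean and standard deviation of accuracy), the snippet below uses scikit-learn's RepeatedStratifiedKFold. The logistic-regression classifier and random placeholder features are stand-ins for the 3DCNN-BiLSTM model and the OVPD recordings, which are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(240, 512))        # placeholder EEG feature vectors
y = rng.integers(0, 3, size=240)       # 3 classes: positive / neutral / negative

# 5-fold cross-validation repeated 5 times, mirroring the paper's protocol.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)
scores = []
for train_idx, test_idx in cv.split(X, y):
    clf = LogisticRegression(max_iter=1000)   # stand-in for the 3DCNN-BiLSTM
    clf.fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))

print(f"accuracy: {np.mean(scores):.4f} +/- {np.std(scores):.4f}")
```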

Comparison with existing methods: Compared with baseline methods, the proposed model achieves better recognition performance on the OVPD dataset. For both the 3DCNN-BiLSTM model and the baselines, the average accuracy over positive, neutral and negative emotions is higher in response to olfactory-enhanced videos than to pure videos.

Conclusion: The proposed 3DCNN-BiLSTM model is effective because it fuses the spatial, frequency and temporal features of EEG signals for emotion recognition. The added olfactory stimuli induce stronger emotions than traditional video stimuli alone and improve emotion recognition accuracy to a certain extent. However, superimposing odors unrelated to the video scenes may distract participants' attention and thus reduce the final accuracy of EEG emotion recognition.

Keywords: Convolutional recurrent neural network; EEG; Emotion recognition; Odor-Video stimulation.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Attention
  • Electroencephalography*
  • Emotions
  • Humans
  • Memory, Long-Term
  • Neural Networks, Computer*