Multi-domain Encoding of Spatiotemporal Dynamics in EEG for Emotion Recognition

IEEE J Biomed Health Inform. 2022 Dec 27:PP. doi: 10.1109/JBHI.2022.3232497. Online ahead of print.

Abstract

The common goal of emotion-recognition studies is to map emotional states encoded in electroencephalogram (EEG) signals onto two-dimensional arousal-valence scores. This remains challenging because each emotion has its own spatial structure and dynamic dependencies across distinct time segments of the EEG signals. This paper aims to model dynamic human emotional behavior by considering both the location connectivity and the context dependency of brain electrodes. To this end, we designed a hybrid EEG modeling method, named MSDTTs, that is built mainly on the attention mechanism and combines a multi-domain spatial transformer (MST) module with a dynamic temporal transformer (DTT) module. Specifically, the MST module extracts single-domain and cross-domain features from different brain regions and fuses them into multi-domain spatial features. Meanwhile, a temporal dynamic excitation (TDE) block is inserted into the multi-head convolutional transformer to form the DTT module; these two blocks work together to activate and extract the emotion-related dynamic temporal features within the DTT module. Furthermore, we place a convolutional mapping inside the transformer structure to mine the static context features among the keyframes. Overall, a high classification accuracy of 98.91%/0.14% was obtained on the β frequency band of the DEAP dataset, and accuracies of 97.52%/0.12% and 96.70%/0.26% were obtained on the γ frequency band of the SEED and SEED-IV datasets, respectively. Empirical experiments indicate that our proposed method achieves remarkable results in comparison with state-of-the-art algorithms.
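The pipeline described above (spatial attention over electrodes, temporal excitation of dynamic segments, then temporal attention) can be illustrated with a minimal numpy sketch. This is not the authors' implementation: every function name, the identity attention projections, and the sigmoid frame-difference gate standing in for TDE are simplifying assumptions made purely for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # x: (tokens, dim); scaled dot-product attention with identity
    # projections, a bare-bones stand-in for a transformer block
    d = x.shape[-1]
    scores = softmax(x @ x.T / np.sqrt(d), axis=-1)
    return scores @ x

def temporal_excitation(x):
    # Hypothetical TDE analogue: gate each time step by a sigmoid of its
    # frame-difference energy, so rapidly changing segments are emphasized
    diff = np.diff(x, axis=0, prepend=x[:1])
    gate = 1.0 / (1.0 + np.exp(-diff.mean(axis=-1, keepdims=True)))
    return x * gate

def msdtts_sketch(eeg):
    # eeg: (time segments, electrodes, per-electrode features)
    T, C, F = eeg.shape
    # MST analogue: attention across electrodes within each time segment
    spatial = np.stack([self_attention(eeg[t]) for t in range(T)])
    tokens = spatial.reshape(T, C * F)       # one token per time segment
    tokens = temporal_excitation(tokens)     # activate dynamic segments
    out = self_attention(tokens)             # DTT analogue: temporal attention
    return out.mean(axis=0)                  # pooled embedding for a classifier

rng = np.random.default_rng(0)
embedding = msdtts_sketch(rng.standard_normal((8, 4, 6)))
print(embedding.shape)
```

A real model would replace the identity projections with learned query/key/value weights, use multi-head convolutional attention, and feed the pooled embedding to an arousal-valence classifier head.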