A Novel Domain Adversarial Networks Based on 3D-LSTM and Local Domain Discriminator for Hearing-Impaired Emotion Recognition

IEEE J Biomed Health Inform. 2023 Jan;27(1):363-373. doi: 10.1109/JBHI.2022.3212475. Epub 2023 Jan 4.

Abstract

Recent research on emotion recognition suggests that deep network-based adversarial learning can address the cross-subject problem in emotion recognition. This study constructed a hearing-impaired electroencephalography (EEG) emotion dataset containing three emotions (positive, neutral, and negative) from 15 subjects. An emotional domain adversarial neural network (EDANN) was developed to identify hearing-impaired subjects' emotions by learning hidden emotion information shared between labeled and unlabeled data. For the input data, we propose a spatial filter matrix to reduce overfitting to the training data. A feature extraction network, 3DLSTM-ConvNET, was used to extract comprehensive emotional information across the time, frequency, and spatial dimensions. Moreover, an emotion local domain discriminator and an emotion film-group local domain discriminator were added to reduce the distribution distance between the same kinds of emotions and between different film groups, respectively. According to the experimental results, the average subject-dependent accuracy is 0.984 (STD: 0.011), and the average subject-independent accuracy is 0.679 (STD: 0.140). In addition, by analyzing the discriminative characteristics, we found that the brain regions involved in emotion recognition in the hearing-impaired are distributed over wider areas of the parietal and occipital lobes, which may be caused by visual processing.
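The EDANN described above follows the general domain-adversarial training pattern: a shared feature extractor is trained against one or more domain discriminators through a gradient reversal layer, so that the learned features stay predictive of emotion while becoming domain-invariant. The sketch below is only a minimal PyTorch illustration of that pattern, assuming a flattened EEG feature vector as input and a single global domain discriminator; it is not the authors' 3DLSTM-ConvNET or their local discriminators, and all layer sizes are placeholders.

```python
import torch
import torch.nn as nn
from torch.autograd import Function


class GradReverse(Function):
    """Gradient reversal layer used in domain-adversarial training (DANN-style)."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the feature extractor.
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class DomainAdversarialSketch(nn.Module):
    """Feature extractor + emotion classifier + domain discriminator.

    Placeholder MLP layers stand in for the paper's 3DLSTM-ConvNET;
    in_dim, feat_dim, and n_domains are illustrative assumptions.
    """

    def __init__(self, in_dim=310, feat_dim=64, n_emotions=3, n_domains=2):
        super().__init__()
        self.feature = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, feat_dim), nn.ReLU(),
        )
        self.emotion_head = nn.Linear(feat_dim, n_emotions)   # labeled source data
        self.domain_head = nn.Linear(feat_dim, n_domains)     # source vs. target

    def forward(self, x, lambd=1.0):
        f = self.feature(x)
        emotion_logits = self.emotion_head(f)
        # The domain branch sees reversed gradients, pushing features toward domain invariance.
        domain_logits = self.domain_head(grad_reverse(f, lambd))
        return emotion_logits, domain_logits
```

In the paper's setting, the same reversal mechanism would additionally feed class-conditional (local) discriminators for emotion categories and film groups, so that alignment is encouraged within matching emotion and stimulus subsets rather than only globally.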

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Brain
  • Electroencephalography / methods
  • Emotions*
  • Hearing
  • Humans
  • Nerve Net
  • Persons With Hearing Impairments* / psychology