EEG-based auditory attention decoding with audiovisual speech for hearing-impaired listeners

Cereb Cortex. 2023 Nov 4;33(22):10972-10983. doi: 10.1093/cercor/bhad325.

Abstract

Auditory attention decoding (AAD) can be used to determine the attended speaker during an auditory selective attention task. However, the auditory factors that modulate AAD remain unclear for hearing-impaired (HI) listeners. In this study, scalp electroencephalogram (EEG) was recorded during an auditory selective attention paradigm in which HI listeners were instructed to attend to one of two simultaneous speech streams, with or without congruent visual input (articulation movements), and at a high or low target-to-masker ratio (TMR). In addition, behavioral hearing tests (i.e. audiogram, speech reception threshold, temporal modulation transfer function) were used to assess listeners' individual auditory abilities. The results showed that both visual input and increasing TMR significantly enhanced the cortical tracking of the attended speech and AAD accuracy. Further analysis revealed that the audiovisual (AV) gain in attended-speech cortical tracking was significantly correlated with listeners' auditory amplitude modulation (AM) sensitivity, and that the TMR gain in attended-speech cortical tracking was significantly correlated with listeners' hearing thresholds. Temporal response function analysis showed that subjects with higher AM sensitivity exhibited more AV gain over the right occipitotemporal and bilateral frontocentral scalp electrodes.
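The AAD approach summarized above is commonly implemented as a backward (stimulus-reconstruction) model: a regularized linear decoder maps time-lagged EEG to the speech amplitude envelope, and the attended stream is classified as the one whose envelope correlates more strongly with the reconstruction. The sketch below illustrates this general technique on synthetic data; the sampling rate, lag count, channel count, and ridge parameter are illustrative assumptions, not the settings used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n_samp, n_ch, n_lags = 64, 64 * 60, 16, 16  # hypothetical: 60 s at 64 Hz, 16 channels

def smooth_env(n):
    """Low-pass random signal standing in for a real speech envelope."""
    e = np.abs(rng.standard_normal(n))
    k = np.hanning(9)
    return np.convolve(e, k / k.sum(), mode="same")

env_a, env_b = smooth_env(n_samp), smooth_env(n_samp)  # two competing "talkers"

# Simulated EEG: every channel tracks the attended envelope (env_a) plus noise.
mix = rng.standard_normal(n_ch)
eeg = np.outer(env_a, mix) + 0.5 * rng.standard_normal((n_samp, n_ch))

def lagged(X, L):
    """Stack L time-lagged copies of each channel (backward-model design matrix)."""
    cols = [np.roll(X, -lag, axis=0) for lag in range(L)]
    return np.hstack(cols)[: len(X) - L]  # drop wrapped-around rows

half = n_samp // 2
X_tr, X_te = lagged(eeg[:half], n_lags), lagged(eeg[half:], n_lags)
y_tr = env_a[: half - n_lags]

# Ridge-regularized stimulus-reconstruction decoder (closed form).
lam = 1e2
W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(X_tr.shape[1]), X_tr.T @ y_tr)

# Decode attention on held-out data: attended = higher envelope correlation.
recon = X_te @ W
r_a = np.corrcoef(recon, env_a[half: half + len(recon)])[0, 1]
r_b = np.corrcoef(recon, env_b[half: half + len(recon)])[0, 1]
decoded = "A" if r_a > r_b else "B"
```

In this toy setup the decoder reconstructs the attended envelope from the simulated EEG, so `r_a` exceeds `r_b` and the attended talker is decoded correctly; with real EEG the same comparison is typically made per trial, and the "AV gain" reported in the abstract corresponds to higher reconstruction correlations for the attended stream when congruent visual input is present.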

Keywords: EEG; audiovisual speech; auditory attention decoding; hearing impairment; speech-in-noise.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Attention / physiology
  • Auditory Threshold / physiology
  • Electroencephalography
  • Hearing / physiology
  • Hearing Loss*
  • Humans
  • Speech
  • Speech Perception* / physiology