Auditory Discrimination Elicited by Nonspeech and Speech Stimuli in Children With Congenital Hearing Loss

J Speech Lang Hear Res. 2022 Oct 17;65(10):3981-3995. doi: 10.1044/2022_JSLHR-22-00008. Epub 2022 Sep 12.

Abstract

Purpose: Congenital deafness not only delays auditory development but also hampers the ability to perceive nonspeech and speech signals. This study aimed to use auditory event-related potentials to explore the mismatch negativity (MMN), P3a, negative wave (Nc), and late discriminative negativity (LDN) components in children with and without hearing loss.

Method: Nineteen children with normal hearing (CNH) and 17 children with hearing loss (CHL) participated in this study. Two stimulus contrasts, pure tones (1 kHz vs. 1.1 kHz) and lexical tones (/ba2/ vs. /ba4/), were used to examine the auditory discrimination process.
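For readers unfamiliar with how MMN is quantified, the sketch below illustrates the conventional approach of subtracting the averaged response to the standard stimulus from the averaged response to the deviant and taking the most negative peak of the difference wave in an early analysis window. This is a minimal illustration only: the sampling rate, the 100-250 ms search window, and the synthetic data are assumptions for demonstration and do not reflect the authors' actual recording or analysis parameters.

```python
# Minimal sketch: MMN peak latency/amplitude from a deviant-minus-standard
# difference wave. Sampling rate, epoch timing, and the 100-250 ms window
# are illustrative assumptions, not the authors' parameters.
import numpy as np

def mmn_peak(standard_erp, deviant_erp, sfreq=500.0, tmin=-0.1, window=(0.10, 0.25)):
    """Return (latency_s, amplitude) of the most negative point of the
    difference wave (deviant - standard) within the search window.

    standard_erp, deviant_erp : 1-D arrays of averaged voltage at one electrode,
    sampled at `sfreq` Hz, with the epoch starting at `tmin` seconds.
    """
    diff = deviant_erp - standard_erp                # classic MMN difference wave
    times = tmin + np.arange(diff.size) / sfreq      # time axis in seconds
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.argmin(diff[mask])                      # MMN is a negativity, so take the minimum
    return times[mask][idx], diff[mask][idx]

# Example with synthetic data: 0.7-s epochs at 500 Hz starting at -0.1 s.
rng = np.random.default_rng(0)
n = int(0.7 * 500)
t = -0.1 + np.arange(n) / 500.0
standard = rng.normal(0, 0.2, n)
deviant = standard - 2.0 * np.exp(-((t - 0.18) ** 2) / (2 * 0.02 ** 2))  # simulated MMN near 180 ms
lat, amp = mmn_peak(standard, deviant)
print(f"MMN peak at {lat * 1000:.0f} ms, amplitude {amp:.2f} µV")
```

The same difference-wave logic, with later search windows, is the usual basis for measuring P3a, Nc, and LDN peaks as well.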

Results: MMN was elicited by both the pure-tone and the lexical-tone contrasts in both groups. MMN latencies for nonspeech and speech stimuli were later in CHL than in CNH. Additionally, in CNH the speech-elicited MMN occurred later over the left hemisphere than over the right, and the speech-elicited MMN amplitude was reduced in CHL relative to CNH, indicating a discrimination deficit. Although P3a latency and amplitude elicited by nonspeech did not differ significantly between CHL and CNH, the speech-elicited Nc amplitude was markedly smaller in CHL than in CNH. Furthermore, the nonspeech-elicited LDN latency was later in CHL than in CNH, and the speech-elicited LDN amplitude showed right-hemisphere dominance in both CNH and CHL.

Conclusion: Based on findings from both nonspeech and speech auditory conditions, we propose MMN, Nc, and LDN as potential indices for investigating auditory perception, memory, and discrimination.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Acoustic Stimulation
  • Auditory Perception
  • Child
  • Deafness*
  • Electroencephalography
  • Evoked Potentials, Auditory
  • Hearing Loss, Sensorineural*
  • Humans
  • Speech
  • Speech Perception*

Grants and funding

This research was funded by the Humanities and Social Sciences Youth Foundation, Ministry of Education of the People's Republic of China (18YJC740128); the Natural Science Foundation of Shandong Province (ZR2021MC052); the Postgraduate Education Quality Improvement Plan in Shandong Province, China (SDYAL19164); and the Research Start-up Fund Project of Binzhou Medical University, China (BY2017KYQD05), all awarded to Ying Yang. This study was partially supported by the National Natural Science Foundation of China under Grants 81530030 and 81873697, awarded to Qingyin Zheng and Bo Li, respectively; the National Institute on Deafness and Other Communication Disorders under Grant R01DC015111, awarded to Qingyin Zheng; and the Taishan Scholar Foundation.