The use of lexical semantics for processing face-masked speech in monolinguals and bilinguals

J Acoust Soc Am. 2023 Aug 1;154(2):1202-1210. doi: 10.1121/10.0020723.

Abstract

Face masks impede visual and acoustic cues that help make speech processing and language comprehension more efficient. Many studies have reported this phenomenon, but few have examined how listeners utilize semantic information to overcome the challenges posed by face masks. Fewer still have investigated this impact on bilinguals' processing of face-masked speech [Smiljanic, Keerstock, Meemann, and Ransom (2021). J. Acoust. Soc. Am. 149(6), 4013-4023; Truong, Beck, and Weber (2021). J. Acoust. Soc. Am. 149(1), 142-144]. Therefore, this study aims to determine how monolingual and bilingual listeners use semantic information to compensate for the loss of visual and acoustic information when the speaker is wearing a mask. A lexical priming experiment tested how monolingual listeners and early-acquiring simultaneous bilingual listeners responded to videos of English word pairs. The prime-target pairs were either strongly related, weakly related, or unrelated, and both words were either masked or unmasked. Analyses of reaction time results showed an overall effect of masking in both groups and an effect of semantic association strength on processing masked and unmasked speech. However, the listener groups did not differ, and subsequent analyses of difference values showed no effect of semantic context. These results illustrate the limited role of word-level semantic information in processing speech under adverse listening conditions. Results are discussed in light of semantic processing at the sentence level.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Acoustics
  • Language
  • Masks
  • Semantics*
  • Speech*