Monaural speech segregation using synthetic speech signals

J Acoust Soc Am. 2006 Apr;119(4):2327-33. doi: 10.1121/1.2170030.

Abstract

When listening to natural speech, listeners are fairly adept at using cues such as pitch, vocal tract length, prosody, and level differences to extract a target speech signal from an interfering speech masker. However, little is known about the cues that listeners might use to segregate synthetic speech signals that retain the intelligibility characteristics of speech but lack many of the features that listeners normally use to segregate competing talkers. In this experiment, intelligibility was measured in a diotic listening task that required the segregation of two simultaneously presented synthetic sentences. Three types of synthetic signals were created: (1) sine-wave speech (SWS); (2) modulated noise-band speech (MNB); and (3) modulated sine-band speech (MSB). Listeners performed worse with all three types of synthetic signals than with natural speech, particularly at low signal-to-noise ratios (SNRs). The results indicate that, of the three synthetic signals, SWS preserves more of the voice characteristics used for speech segregation than MNB or MSB speech. These findings have implications for cochlear implant users, who rely on signals very similar to MNB speech and thus are likely to have difficulty understanding speech in cocktail-party listening environments.
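For readers unfamiliar with modulated noise-band (noise-vocoded) speech, the signal class the abstract likens to cochlear implant processing, the following is a minimal illustrative sketch of how such a stimulus can be generated: band-pass filter the speech into a few analysis bands, extract each band's temporal envelope, and use that envelope to modulate band-limited noise. The band count, band edges, filter orders, and envelope cutoff below are assumptions for illustration only, not the parameters used in this study.

```python
# Hedged sketch of modulated noise-band (noise-vocoded) speech synthesis.
# All parameter choices here are illustrative assumptions, not the authors' settings.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_bands=8, f_lo=100.0, f_hi=6000.0):
    """Replace each analysis band of `speech` with envelope-modulated noise."""
    # Log-spaced band edges between f_lo and f_hi (assumed spacing).
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    rng = np.random.default_rng(0)
    out = np.zeros(len(speech), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Band-pass the speech to isolate one analysis band.
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        # Temporal envelope via Hilbert magnitude, smoothed below ~30 Hz.
        env = np.abs(hilbert(band))
        env_sos = butter(2, 30.0, btype="lowpass", fs=fs, output="sos")
        env = sosfiltfilt(env_sos, env)
        # Modulate band-limited noise with the envelope and accumulate.
        noise = sosfiltfilt(sos, rng.standard_normal(len(speech)))
        out += env * noise
    # Normalize to avoid clipping.
    return out / (np.max(np.abs(out)) + 1e-12)
```

Sine-wave speech (SWS) and modulated sine-band speech (MSB) differ mainly in the carrier: SWS replaces formant tracks with time-varying sinusoids, and MSB replaces each noise carrier above with a fixed-frequency tone, so the same band-envelope framework applies with different carriers.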

MeSH terms

  • Adult
  • Attention*
  • Communication Aids for Disabled*
  • Female
  • Functional Laterality*
  • Humans
  • Male
  • Middle Aged
  • Perceptual Masking*
  • Sound Spectrography
  • Speech Acoustics*
  • Speech Intelligibility
  • Speech Perception*
  • Voice Quality