Compensation for visually specified coarticulation in liquid-stop contexts

Atten Percept Psychophys. 2016 Nov;78(8):2341-2347. doi: 10.3758/s13414-016-1187-3.

Abstract

The question of whether speech perceivers use visual coarticulatory information in speech perception remains unanswered, despite numerous past studies. Across different coarticulatory contexts, studies have both detected (e.g., Mitterer in Perception & Psychophysics, 68, 1227-1240, 2006) and failed to detect (e.g., Vroomen & de Gelder in Language and Cognitive Processes, 16, 661-672. doi:10.1080/01690960143000092, 2001) visual effects. In this study, we focused on a liquid-stop coarticulatory context and attempted to resolve the contradictory findings of Fowler, Brown, and Mann (Journal of Experimental Psychology: Human Perception and Performance, 26, 877-888. doi:10.1037/0096-1523.26.3.877, 2000) and Holt, Stephens, and Lotto (Perception & Psychophysics, 67, 1102-1112. doi:10.3758/BF03193635, 2005). We used the original stimuli of Fowler et al., with modifications to the experimental paradigm, to examine whether visual compensation can occur when acoustic coarticulatory information is absent (rather than merely ambiguous). We found that perceivers' categorizations of the target changed when coarticulatory information was presented visually with a silent precursor, suggesting that visually presented coarticulatory information can induce compensation. However, we failed to detect this effect when the same visual information was accompanied by an ambiguous auditory precursor, suggesting that these effects are weaker and less robust than auditory compensation. We discuss why this might be the case and examine the implications for accounts of coarticulatory compensation.

Keywords: Audiovisual speech; Coarticulation; Speech perception.

MeSH terms

  • Adult
  • Humans
  • Speech / physiology*
  • Speech Perception / physiology*
  • Visual Perception / physiology*
  • Young Adult