Children's integration of speech and pointing gestures in comprehension

J Exp Child Psychol. 1994 Jun;57(3):327-54. doi: 10.1006/jecp.1994.1016.

Abstract

We examined 4- and 9-year-olds' referential comprehension when given pointing gestures and spoken labels in two types of contextually ambiguous situations. In one situation, speech/gesture discordance was produced under conditions in which the labels for the four objects being referred to sounded either alike or different from one another. In the other, the contextual set contained the same two objects, and ambiguity was produced by factorially combining speech from a continuum ranging between /bɔl/ and /dɔl/ with a pointing gesture from a continuum ranging between an unambiguous point to a ball and an unambiguous point to a doll. Results showed that the speech modality had a far greater influence on word comprehension than gestures did. Second, the influence of gestures was greater for the older children. Mathematical models of speech-gesture understanding were tested against the data. Selection models assume that only one dimension of information is used on a given trial and that the selection of a modality depends on the ambiguity of the information encoded on the dominant dimension. The Fuzzy Logical Model of Perception (FLMP) assumes that both modalities are evaluated independently of one another and then integrated to achieve comprehension. The results from both age groups were best described by the assumptions of the FLMP. The results are related to general claims about perceptual development during childhood concerning the quality of the representations formed and dimensional selectivity in visual-spoken language.
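The contrast between the two model classes can be illustrated with a minimal sketch. The FLMP's standard two-alternative integration rule combines the independent support each modality gives one response alternative via a relative-goodness (Bayes-like) ratio; a selection model instead uses only one modality on a given trial. The function names and the specific parameter values below are illustrative assumptions, not the paper's fitted estimates.

```python
def flmp_response(a, v):
    """FLMP two-alternative prediction: a and v are the independent
    degrees of support (0..1) that the auditory and gestural sources
    give to one alternative (e.g., 'ball'). Integration is
    multiplicative, normalized by the total support for both
    alternatives (relative goodness rule)."""
    return (a * v) / (a * v + (1 - a) * (1 - v))

def selection_response(a, v, p_speech):
    """Illustrative selection-model prediction: on each trial only one
    modality drives the response; p_speech is the probability that the
    (dominant) speech dimension is selected."""
    return p_speech * a + (1 - p_speech) * v

# A fully ambiguous source (0.5) leaves the FLMP prediction driven
# entirely by the other modality:
print(flmp_response(0.9, 0.5))   # equals the auditory support alone
# Two moderately supportive sources reinforce each other:
print(flmp_response(0.8, 0.8))   # exceeds either source by itself
```

A diagnostic difference follows directly: under the FLMP, two consistent but individually ambiguous sources yield a more extreme response than either alone, whereas the selection model's prediction is always a weighted average that stays between them.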

Publication types

  • Research Support, Non-U.S. Gov't
  • Research Support, U.S. Gov't, Non-P.H.S.
  • Research Support, U.S. Gov't, P.H.S.

MeSH terms

  • Auditory Perception
  • Child, Preschool
  • Cognition*
  • Female
  • Gestures*
  • Humans
  • Infant
  • Language
  • Male
  • Phonetics
  • Speech Perception*
  • Visual Perception