Learnability of prosodic boundaries: Is infant-directed speech easier?

J Acoust Soc Am. 2016 Aug;140(2):1239. doi: 10.1121/1.4960576.

Abstract

This study explores the long-standing hypothesis that the acoustic cues to prosodic boundaries in infant-directed speech (IDS) make those boundaries easier to learn than those in adult-directed speech (ADS). Three cues (pause duration, nucleus duration, and pitch change) were investigated by means of a systematic review of the literature, statistical analyses of a corpus of Japanese, and machine learning experiments. The review of previous work revealed that the effect of register on boundary cues is less well established than previously thought, and that results often vary across studies for certain cues. Statistical analyses run on a large database of mother-child and mother-interviewer interactions showed that the duration of a pause and the duration of the syllable nucleus preceding the boundary are two cues that are enhanced in IDS, while f0 change is actually degraded in IDS. Supervised and unsupervised machine learning techniques applied to these acoustic cues revealed that IDS boundaries were consistently better classified than ADS ones, regardless of the learning method used. The role of the cues examined in this study and the importance of these findings in the more general context of early linguistic structure acquisition are discussed.
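The supervised-classification idea described above can be sketched as follows. This is a minimal illustration, not the paper's actual method or data: the cue distributions below are invented synthetic values (seconds for durations, semitones for f0 change), and the classifier is a plain logistic regression trained by gradient descent on the three cues (pause duration, nucleus duration, f0 change) to label each syllable position as boundary or non-boundary.

```python
import math
import random

random.seed(0)

def make_samples(n, boundary):
    """Generate synthetic (cues, label) pairs.

    Cue values are hypothetical, chosen only so that boundaries tend to
    have longer pauses, longer nuclei, and larger f0 change, as the
    enhanced-cue hypothesis would predict. They are not corpus statistics.
    """
    samples = []
    for _ in range(n):
        if boundary:
            pause = random.gauss(0.40, 0.10)       # pre-boundary pause (s)
            nucleus = random.gauss(0.18, 0.04)     # nucleus duration (s)
            f0_change = random.gauss(3.0, 1.0)     # pitch change (st)
        else:
            pause = max(0.0, random.gauss(0.05, 0.05))
            nucleus = random.gauss(0.10, 0.03)
            f0_change = random.gauss(1.0, 1.0)
        samples.append(([pause, nucleus, f0_change], 1 if boundary else 0))
    return samples

data = make_samples(200, True) + make_samples(200, False)
random.shuffle(data)
train, test = data[:300], data[300:]

# Logistic regression over the three cues, trained by stochastic
# gradient descent on the log-loss.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.1
for _ in range(200):
    for x, y in train:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1.0 / (1.0 + math.exp(-z))
        err = p - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

# Held-out accuracy: predict "boundary" when the decision score is positive.
correct = sum(
    ((sum(wi * xi for wi, xi in zip(w, x)) + b) > 0) == (y == 1)
    for x, y in test
)
accuracy = correct / len(test)
print(f"held-out boundary-classification accuracy: {accuracy:.2f}")
```

Comparing this accuracy between a model trained on IDS-like cue distributions and one trained on ADS-like distributions (with smaller between-class separation) mirrors the logic of the paper's register comparison.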

Publication types

  • Research Support, Non-U.S. Gov't
  • Review
  • Systematic Review

MeSH terms

  • Age Factors
  • Child Language*
  • Cues*
  • Female
  • Humans
  • Infant
  • Mothers
  • Speech
  • Speech Acoustics
  • Speech Perception
  • Supervised Machine Learning
  • Unsupervised Machine Learning