Grounded language acquisition through the eyes and ears of a single child

Science. 2024 Feb 2;383(6682):504-511. doi: 10.1126/science.adi1374. Epub 2024 Feb 1.

Abstract

Starting around 6 to 9 months of age, children begin acquiring their first words, linking spoken words to their visual counterparts. How much of this knowledge is learnable from sensory input with relatively generic learning mechanisms, and how much requires stronger inductive biases? Using longitudinal head-mounted camera recordings from one child aged 6 to 25 months, we trained a relatively generic neural network on 61 hours of correlated visual-linguistic data streams, learning feature-based representations and cross-modal associations. Our model acquires many word-referent mappings present in the child's everyday experience, enables zero-shot generalization to new visual referents, and aligns its visual and linguistic conceptual systems. These results show how critical aspects of grounded word meaning are learnable through joint representation and associative learning from one child's input.
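
The "joint representation and associative learning" described above amounts to contrastive cross-modal training: a vision encoder and a language encoder embed their inputs into a shared space, and co-occurring frame-utterance pairs are pulled together while mismatched pairs within a batch are pushed apart. The sketch below illustrates that style of objective in PyTorch; the encoder modules, projection layers, embedding dimension, and all names here are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of a CLIP-style contrastive objective over paired
    # (frame, utterance) examples. Encoders, dimensions, and names are
    # illustrative assumptions, not the paper's code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CrossModalModel(nn.Module):
        def __init__(self, vision_encoder: nn.Module, text_encoder: nn.Module,
                     embed_dim: int = 512):
            super().__init__()
            self.vision_encoder = vision_encoder  # e.g., a CNN over video frames
            self.text_encoder = text_encoder      # e.g., an embedder over utterances
            # Project each modality into a shared embedding space.
            self.img_proj = nn.LazyLinear(embed_dim)
            self.txt_proj = nn.LazyLinear(embed_dim)
            # Learnable temperature for scaling similarities.
            self.logit_scale = nn.Parameter(torch.tensor(1.0))

        def forward(self, frames, utterances):
            # L2-normalize so dot products are cosine similarities.
            img = F.normalize(self.img_proj(self.vision_encoder(frames)), dim=-1)
            txt = F.normalize(self.txt_proj(self.text_encoder(utterances)), dim=-1)
            return img, txt, self.logit_scale.exp()

    def contrastive_loss(img, txt, scale):
        # Symmetric InfoNCE: each frame should match its own utterance
        # (and vice versa) against all other pairs in the batch.
        logits = scale * img @ txt.t()              # (batch, batch) similarities
        targets = torch.arange(len(img), device=img.device)
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))

    @torch.no_grad()
    def zero_shot_label(model, frame, candidate_utterances):
        # Zero-shot naming of a new visual referent: embed one frame and
        # pick the candidate word with the highest cosine similarity.
        img, txt, _ = model(frame.unsqueeze(0), candidate_utterances)
        return (img @ txt.t()).argmax().item()

In this setup, in-batch negatives make explicit negative mining unnecessary, and zero-shot generalization to new visual referents falls out of the shared space: a novel image is embedded and compared against candidate word embeddings with no additional training.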

MeSH terms

  • Child
  • Ear*
  • Eye*
  • Humans
  • Knowledge
  • Language Development*
  • Linguistics*
  • Neural Networks, Computer
  • Supervised Machine Learning*
  • Video Recording