Sounds like a fight: listeners can infer behavioural contexts from spontaneous nonverbal vocalisations

Cogn Emot. 2024 May;38(3):277-295. doi: 10.1080/02699931.2023.2285854. Epub 2023 Nov 24.

Abstract

When we hear another person laugh or scream, can we tell the kind of situation they are in - for example, whether they are playing or fighting? Nonverbal expressions are theorised to vary systematically across behavioural contexts. Perceivers might be sensitive to these putative systematic mappings and thereby correctly infer contexts from others' vocalisations. Here, in two pre-registered experiments, we test the prediction that listeners can accurately deduce production contexts (e.g. being tickled, discovering threat) from spontaneous nonverbal vocalisations, like sighs and grunts. In Experiment 1, listeners (total n = 3120) matched 200 nonverbal vocalisations to one of 10 contexts using yes/no response options. Using signal detection analysis, we show that listeners were accurate at matching vocalisations to nine of the contexts. In Experiment 2, listeners (n = 337) categorised the production contexts by selecting from 10 response options in a forced-choice task. By analysing unbiased hit rates, we show that participants categorised all 10 contexts at better-than-chance levels. Together, these results demonstrate that perceivers can infer behavioural contexts from nonverbal vocalisations at above-chance rates, suggesting that listeners are sensitive to systematic mappings between acoustic structures in vocalisations and behavioural contexts.
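The two accuracy measures named in the abstract, sensitivity (d') from signal detection theory for the yes/no task and Wagner's unbiased hit rate (Hu) for the forced-choice task, can be illustrated with a minimal sketch. The counts below are made up for illustration and are not the study's data:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each cell) keeps z-scores
    finite when a raw rate would be exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

def unbiased_hit_rate(confusion, stimulus, response):
    """Wagner's (1993) unbiased hit rate for one category:
    (cell count squared) / (stimulus row total * response column total).
    `confusion` maps stimulus context -> {response context: count}.
    """
    cell = confusion[stimulus][response]
    row_total = sum(confusion[stimulus].values())
    col_total = sum(row[response] for row in confusion.values())
    return cell ** 2 / (row_total * col_total)

# Hypothetical yes/no matching counts for one context:
sensitivity = d_prime(hits=16, misses=4, false_alarms=4, correct_rejections=16)

# Hypothetical two-context confusion matrix from a forced-choice task:
confusion = {
    "tickling": {"tickling": 15, "threat": 5},
    "threat": {"tickling": 5, "threat": 15},
}
hu = unbiased_hit_rate(confusion, "tickling", "tickling")
```

A d' of 0 (or an Hu at its chance value) would mean listeners could not distinguish the context; the study's claim is that observed values exceeded chance for nine of ten contexts in Experiment 1 and all ten in Experiment 2.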

Keywords: Behavioural context; evolution; nonverbal communication; vocalisation.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adolescent
  • Adult
  • Auditory Perception*
  • Female
  • Humans
  • Male
  • Nonverbal Communication* / psychology
  • Social Perception
  • Sound
  • Young Adult

Grants and funding

R.G.K. and D.A.S. are supported by ERC Starting grant no. 714977 awarded to D.A.S.