Effects of speaker emotional facial expression and listener age on incremental sentence processing

PLoS One. 2013 Sep 6;8(9):e72559. doi: 10.1371/journal.pone.0072559. eCollection 2013.

Abstract

We report two visual-world eye-tracking experiments that investigated how, and over what time course, emotional information from a speaker's face affects younger (N = 32, mean age = 23) and older (N = 32, mean age = 64) listeners' visual attention and language comprehension as they processed emotional sentences in a visual context. The age manipulation tested the prediction of socio-emotional selectivity theory that older adults show a positivity effect. After viewing the emotional face of a speaker (happy or sad) on a computer display, participants were simultaneously presented with two pictures depicting opposite-valence events (positive and negative; IAPS database) while they listened to a sentence referring to one of the events. Participants' fixations on the pictures during sentence processing increased when the speaker's face was (vs. was not) emotionally congruent with the sentence. This enhancement occurred from the early stages of referential disambiguation and was modulated by age: for older adults it was more pronounced with positive faces, and for younger adults with negative faces. These findings demonstrate for the first time that emotional facial expressions, like previously studied speaker cues such as eye gaze and gestures, are rapidly integrated into sentence processing. They also provide new evidence for positivity effects in older adults during situated sentence processing.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adult
  • Age Factors
  • Aged
  • Attention*
  • Facial Expression*
  • Female
  • Fixation, Ocular
  • Gestures
  • Happiness
  • Humans
  • Male
  • Middle Aged
  • Reaction Time
  • Speech
  • Speech Discrimination Tests
  • Visual Perception
  • Young Adult

Grants and funding

This research was supported by the German Research Foundation (DFG; http://www.dfg.de/) within the SFB-673 ‘Alignment in Communication’, Project A1 ‘Modelling Partners’ (http://www.sfb673.org/home), and by the Cognitive Interaction Technology Excellence Center (http://www.cit-ec.de/). The authors also acknowledge support for the Article Processing Charge from the Deutsche Forschungsgemeinschaft and the Open Access Publication Funds of Bielefeld University Library. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.