Is there a dynamic advantage for facial expressions?

J Vis. 2011 Mar 22;11(3):17. doi: 10.1167/11.3.17.

Abstract

Some evidence suggests that it is easier to identify facial expressions (FEs) shown as dynamic displays than as photographs (dynamic advantage hypothesis). Previously, this has been tested using dynamic FEs simulated either by morphing a neutral face into an emotional one or by computer animations. For the first time, we tested the dynamic advantage hypothesis using high-speed recordings of actors' FEs. In the dynamic condition, stimuli were graded blends of two recordings (duration: 4.18 s), each describing the unfolding of an expression from neutral to apex. In the static condition, stimuli (duration: 3 s) were blends of just the apex of the same recordings. Stimuli for both conditions were generated by linearly morphing one expression into the other. Performance was assessed with a forced-choice task asking participants to identify which prototype the morphed stimulus was more similar to. Identification accuracy did not differ between conditions. Response times (RTs) measured from stimulus onset were shorter for static than for dynamic stimuli. Yet, most responses to dynamic stimuli were given before expressions reached their apex. Thus, with a threshold model, we tested whether discriminative information is integrated more effectively in dynamic than in static conditions. We did not find any systematic difference. In short, neither identification accuracy nor RTs supported the dynamic advantage hypothesis.
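The abstract describes stimuli built by linearly morphing one prototype expression into the other to obtain graded blends. The sketch below illustrates that kind of linear blend under stated assumptions: the recordings are represented as temporally aligned numeric arrays (e.g., landmark coordinates or pixel values), and the array shapes, blend weights, and function name are illustrative, not taken from the study's actual morphing pipeline.

```python
import numpy as np

def linear_morph(expr_a: np.ndarray, expr_b: np.ndarray, alpha: float) -> np.ndarray:
    """Blend two expression recordings frame by frame.

    expr_a, expr_b: arrays of shape (n_frames, ...) holding temporally
    aligned frames (e.g., landmark coordinates) of the two prototypes.
    alpha: blend weight in [0, 1]; 0 returns expr_a, 1 returns expr_b.
    """
    if expr_a.shape != expr_b.shape:
        raise ValueError("prototype recordings must be temporally aligned")
    return (1.0 - alpha) * expr_a + alpha * expr_b

# Illustrative graded continuum between two prototypes, analogous to the
# morphed blends described in the abstract (values here are random placeholders).
proto_a = np.random.rand(100, 68, 2)   # 100 frames, 68 landmarks, (x, y)
proto_b = np.random.rand(100, 68, 2)
morph_levels = np.linspace(0.0, 1.0, 7)
stimuli = [linear_morph(proto_a, proto_b, a) for a in morph_levels]
```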

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adult
  • Choice Behavior
  • Emotions*
  • Face / physiology*
  • Facial Expression*
  • Female
  • Humans
  • Male
  • Movement / physiology*
  • Pattern Recognition, Visual*
  • Psychometrics
  • Reaction Time
  • Young Adult