Modelling multimodal expression of emotion in a virtual agent

Philos Trans R Soc Lond B Biol Sci. 2009 Dec 12;364(1535):3539-48. doi: 10.1098/rstb.2009.0186.

Abstract

Over the past few years we have been developing an expressive embodied conversational agent system. In particular, we have developed a model of multimodal behaviours that includes both dynamism and complex facial expressions. The first feature, dynamism, refers to the qualitative execution of behaviours; our model is based on perceptual studies and encompasses several parameters that modulate multimodal behaviours. The second feature, the model of complex expressions, follows a componential approach in which a new expression is obtained by combining facial areas of other expressions. Lately we have been working on adding temporal dynamism to expressions, which until now have been designed statically, typically at their apex, so that only full-blown expressions could be modelled. To overcome this limitation, we have defined a representation scheme that describes the temporal evolution of the expression of an emotion: an emotion is no longer represented by a static definition but by a temporally ordered sequence of multimodal signals.
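
To make the representation scheme concrete, here is a minimal Python sketch of the two ideas the abstract names: an emotion encoded as a temporally ordered sequence of multimodal signals rather than a single static expression, and a complex facial expression composed area by area from source expressions. All class names, field names, facial areas, and timings below are illustrative assumptions for this sketch, not the paper's actual formalism or API.

```python
from dataclasses import dataclass, field

@dataclass
class Signal:
    """One multimodal signal (e.g. a facial action, gaze shift, head movement).

    Names and fields are hypothetical, chosen only to illustrate the idea of
    a temporally ordered signal sequence described in the abstract.
    """
    modality: str   # e.g. "face", "gaze", "head", "gesture"
    name: str       # label of the signal, e.g. "smile", "gaze_away"
    start: float    # onset, as a fraction of the emotion episode in [0, 1]
    end: float      # offset, as a fraction of the episode in [0, 1]

@dataclass
class EmotionExpression:
    """An emotion defined as a temporally ordered sequence of signals."""
    emotion: str
    signals: list[Signal] = field(default_factory=list)

    def active_at(self, t: float) -> list[Signal]:
        """Return the signals active at normalized time t, in onset order."""
        return [s for s in sorted(self.signals, key=lambda s: s.start)
                if s.start <= t < s.end]

# Componential composition: build a complex expression by assigning each
# facial area the corresponding area of a source expression (e.g. masking
# sadness in the upper face with a smile in the lower face). The set of
# areas here is an assumption for illustration.
FACIAL_AREAS = ("brows", "eyes", "cheeks", "mouth")

def compose(areas_to_sources: dict[str, str]) -> dict[str, str]:
    """Map each facial area to the expression it is borrowed from."""
    assert set(areas_to_sources) <= set(FACIAL_AREAS)
    return dict(areas_to_sources)

if __name__ == "__main__":
    # A toy episode with invented timings: gaze aversion, then a smile,
    # then a downward head movement.
    episode = EmotionExpression("embarrassment", [
        Signal("gaze", "gaze_away", 0.0, 0.4),
        Signal("face", "smile", 0.2, 0.8),
        Signal("head", "head_down", 0.5, 1.0),
    ])
    print([s.name for s in episode.active_at(0.3)])  # ['gaze_away', 'smile']

    # Sadness masked by a smile: upper face keeps sadness, lower face smiles.
    masked = compose({"brows": "sadness", "eyes": "sadness",
                      "cheeks": "smile", "mouth": "smile"})
    print(masked)
```

The key design point the sketch tries to capture is that the sequence, not any single frame, carries the emotion: querying `active_at` at different times yields different signal combinations, whereas a static, apex-only definition would return the same full-blown expression throughout.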

Publication types

  • Research Support, Non-U.S. Gov't
  • Review

MeSH terms

  • Computer Simulation
  • Emotions / physiology*
  • Facial Expression*
  • Humans
  • Models, Psychological*
  • Social Behavior