Capturing Conversational Gestures for Embodied Conversational Agents Using an Optimized Kanade-Lucas-Tomasi Tracker and Denavit-Hartenberg-Based Kinematic Model

Sensors (Basel). 2022 Oct 29;22(21):8318. doi: 10.3390/s22218318.

Abstract

In order to recreate viable and human-like conversational responses, an artificial entity, i.e., an embodied conversational agent, must express correlated speech (verbal) and gesture (non-verbal) responses in spoken social interaction. Most existing frameworks focus on intent planning and behavior planning; the realization, however, is left to a limited set of static 3D representations of conversational expressions. In addition to functional and semantic synchrony between verbal and non-verbal signals, the final believability of a displayed expression is shaped by the physical realization of its non-verbal part. A major challenge for most conversational systems capable of reproducing gestures is achieving diversity in expressiveness. In this paper, we propose a method for capturing gestures automatically from videos and transforming them into 3D representations stored in the conversational agent's repository of motor skills. The main advantage of the proposed method is that it ensures the naturalness of the embodied conversational agent's gestures, which results in higher-quality human-computer interaction. The method is based on a Kanade-Lucas-Tomasi tracker, a Savitzky-Golay filter, a Denavit-Hartenberg-based kinematic model and the EVA framework. Furthermore, instead of a subjective evaluation of the synthesized movement, we designed an objective evaluation method based on cosine similarity. The proposed method achieved a similarity of 96%.
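
For illustration, the sketch below shows a minimal Python version of the capture-and-evaluate pipeline the abstract describes: Kanade-Lucas-Tomasi (KLT) point tracking over a video, Savitzky-Golay smoothing of the recovered trajectories, and a cosine-similarity score. The video path, feature counts and filter parameters are illustrative assumptions, not values from the paper; the paper's actual pipeline additionally maps the trajectories onto a Denavit-Hartenberg-based kinematic model within the EVA framework.

```python
# A minimal, illustrative sketch (not the paper's implementation):
# KLT point tracking with OpenCV, Savitzky-Golay smoothing with SciPy,
# and a cosine-similarity score. The file name and all numeric
# parameters are assumptions made for this example.
import cv2
import numpy as np
from scipy.signal import savgol_filter

cap = cv2.VideoCapture("gesture.mp4")   # hypothetical input video
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Shi-Tomasi corners seed the Kanade-Lucas-Tomasi tracker.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                              qualityLevel=0.01, minDistance=10)

trajectories = [pts.reshape(-1, 2)]
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal Lucas-Kanade optical flow advances the points one frame;
    # a full implementation would also drop points whose status flag is 0.
    pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                              winSize=(21, 21), maxLevel=3)
    trajectories.append(pts.reshape(-1, 2))
    prev_gray = gray
cap.release()

traj = np.stack(trajectories)            # shape: (frames, points, 2)

# Savitzky-Golay filtering along the time axis removes tracking jitter
# (window_length must be odd and no longer than the number of frames).
smooth = savgol_filter(traj, window_length=9, polyorder=3, axis=0)

def cosine_similarity(a, b):
    """Cosine similarity between two flattened motion trajectories."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The paper scores original vs. synthesized movement; here we merely
# demonstrate the metric's form on the raw vs. smoothed trajectories.
print(cosine_similarity(traj, smooth))
```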
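
A Denavit-Hartenberg-based kinematic model, as named in the abstract, composes one homogeneous transform per joint from four parameters. As a point of reference only (the paper's exact parameterization may differ), the classic DH transform from frame i-1 to frame i, with joint angle θ_i, link offset d_i, link length a_i and link twist α_i, is:

```latex
% Classic Denavit-Hartenberg transform from frame i-1 to frame i.
{}^{i-1}T_i =
\begin{pmatrix}
\cos\theta_i & -\sin\theta_i\cos\alpha_i &  \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\
\sin\theta_i &  \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\
0            &  \sin\alpha_i             &  \cos\alpha_i             & d_i \\
0            & 0                         & 0                         & 1
\end{pmatrix}
```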

Keywords: 3D gestures; Denavit–Hartenberg; Kanade–Lucas–Tomasi tracker; conversational gestures; embodied conversational agents; gesture reconstruction; kinematics; motor skills.

MeSH terms

  • Biomechanical Phenomena
  • Gestures*
  • Humans
  • Motor Skills
  • Semantics
  • Speech* / physiology