A Generative Model to Embed Human Expressivity into Robot Motions

Sensors (Basel). 2024 Jan 16;24(2):569. doi: 10.3390/s24020569.

Abstract

This paper presents a model for generating expressive robot motions based on human expressive movements. The proposed data-driven approach combines variational autoencoders with a generative adversarial network framework to extract the essential features of human expressive motion and to generate expressive robot motion accordingly. The primary objective was to transfer the underlying expressive features from human to robot motion. The model takes two inputs: the robot task, defined by the robot's linear and angular velocities, and the expressive data, defined by the movement of a human body part and represented by its acceleration and angular velocity. The experimental results show that the model can effectively recognize and transfer expressive cues to the robot, producing new movements that incorporate the expressive qualities derived from the human input. Furthermore, the generated motions varied with different human inputs, highlighting the model's ability to produce diverse outputs.
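To make the described pipeline concrete, the sketch below shows one plausible way such an architecture could be wired up in PyTorch. This is an illustration under stated assumptions, not the authors' implementation: the feature dimensions (HUMAN_DIM, TASK_DIM, LATENT_DIM), layer sizes, and class names are all hypothetical, and the adversarial training loop is omitted.

```python
# Minimal sketch (not the authors' implementation): a VAE encodes human
# expressive motion (IMU acceleration + angular velocity) into a latent
# "style" code; a generator conditions the robot task velocities on that
# code; a discriminator judges the realism of the generated motion.
# All dimensions and layer sizes below are illustrative assumptions.
import torch
import torch.nn as nn

HUMAN_DIM = 6    # assumed: 3-axis acceleration + 3-axis angular velocity
TASK_DIM = 6     # assumed: robot linear (3) + angular (3) velocities
LATENT_DIM = 16  # assumed size of the latent expressivity code

class ExpressivityEncoder(nn.Module):
    """VAE encoder: human motion features -> latent style distribution."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(HUMAN_DIM, 64), nn.ReLU())
        self.mu = nn.Linear(64, LATENT_DIM)
        self.logvar = nn.Linear(64, LATENT_DIM)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

class MotionGenerator(nn.Module):
    """Generator: task velocities + latent style -> expressive robot motion."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(TASK_DIM + LATENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, TASK_DIM),
        )

    def forward(self, task, z):
        return self.net(torch.cat([task, z], dim=-1))

class MotionDiscriminator(nn.Module):
    """Adversary: scores whether a robot motion sample looks realistic."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(TASK_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, motion):
        return self.net(motion)

def reparameterize(mu, logvar):
    """Standard VAE reparameterization trick."""
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

# Usage: generate an expressive variant of a robot task command.
encoder, generator = ExpressivityEncoder(), MotionGenerator()
human_motion = torch.randn(1, HUMAN_DIM)  # placeholder IMU features
robot_task = torch.randn(1, TASK_DIM)     # placeholder task velocities
mu, logvar = encoder(human_motion)
z = reparameterize(mu, logvar)
expressive_motion = generator(robot_task, z)
realism_score = MotionDiscriminator()(expressive_motion)
```

In this reading, the VAE latent code acts as a style vector conditioning the generator, while the discriminator (trained adversarially against the generator) would push the conditioned output toward realistic expressive motion; varying the human input changes the latent code and thus the generated motion.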

Keywords: human factors; human-centered robotics; human-in-the-loop; human–robot interaction.

MeSH terms

  • Acceleration
  • Cues
  • Humans
  • Motion
  • Movement
  • Robotics*