Reverse control for humanoid robot task recognition

IEEE Trans Syst Man Cybern B Cybern. 2012 Dec;42(6):1524-37. doi: 10.1109/TSMCB.2012.2193614. Epub 2012 Apr 26.

Abstract

Efficient methods for motion recognition have been developed using statistical tools. These methods rely on learning primitives in a suitable space, for example, the latent space of the joint angles and/or adequate task spaces. Learned primitives are often sequential: a motion is segmented along the time axis. When working with a humanoid robot, however, a motion can be decomposed into parallel subtasks. For example, in a waiter scenario, the robot has to keep some plates horizontal with one arm while placing another plate on the table with its free hand. Recognition therefore cannot be limited to one task per consecutive segment of time. The method presented in this paper exploits knowledge of which tasks the robot is able to perform, and of how the motion is generated from this set of known controllers, to reverse engineer an observed motion. This analysis aims to recognize the parallel tasks that were used to generate the motion. The method relies on the task-function formalism and on projection into the null space of a task to decouple the controllers. The approach is successfully applied on a real robot to disambiguate motions in scenarios where two motions look similar but serve different purposes.
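
The null-space composition referred to in the abstract is the standard prioritized task-function construction: each lower-priority controller is projected into the null space of the Jacobians of the higher-priority tasks, so it cannot disturb them. The sketch below is a minimal illustration of that generation step, not the authors' implementation; the function names (damped_pinv, stacked_task_velocity), the damping constant, and the random Jacobians in the usage example are illustrative assumptions.

    import numpy as np

    def damped_pinv(J, lam=1e-6):
        """Damped least-squares pseudo-inverse (illustrative choice),
        kept well-conditioned near singular configurations."""
        return J.T @ np.linalg.inv(J @ J.T + lam * np.eye(J.shape[0]))

    def stacked_task_velocity(jacobians, task_velocities, n_joints):
        """Combine prioritized task controllers: each task i is resolved
        inside the null space of tasks 1..i-1 (task-function / null-space
        projection composition)."""
        dq = np.zeros(n_joints)
        N = np.eye(n_joints)  # null-space projector of the tasks handled so far
        for J, dx in zip(jacobians, task_velocities):
            JN = J @ N                                   # Jacobian restricted to the remaining null space
            dq = dq + damped_pinv(JN) @ (dx - J @ dq)    # correct only what higher tasks left free
            N = N - damped_pinv(JN) @ JN                 # shrink the null space for the next task
        return dq

    # Usage (made-up numbers): two 3-D tasks on a 7-DoF arm.
    rng = np.random.default_rng(0)
    J1, J2 = rng.standard_normal((3, 7)), rng.standard_normal((3, 7))
    dx1, dx2 = np.array([0.1, 0.0, 0.0]), np.array([0.0, 0.05, 0.0])
    dq = stacked_task_velocity([J1, J2], [dx1, dx2], n_joints=7)
    print(J1 @ dq)  # approximately dx1: the secondary task does not perturb the primary one

Recognition then amounts to running this generation model in reverse: testing which combination of known task controllers, composed this way, best reproduces the observed joint motion.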

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms*
  • Cybernetics
  • Models, Theoretical
  • Motion*
  • Pattern Recognition, Automated / methods*
  • Robotics / instrumentation*
  • Robotics / methods*