Markerless motion capture using appearance and inertial data

Annu Int Conf IEEE Eng Med Biol Soc. 2014;2014:6907-10. doi: 10.1109/EMBC.2014.6945216.

Abstract

Current monitoring techniques for biomechanical analysis typically capture only a snapshot of the subject's state because of the challenges associated with long-term monitoring. Continuous long-term capture of biomechanics can be used to assess workplace performance and at-home rehabilitation. Noninvasive motion capture using small, low-power wearable sensors and camera systems has been explored; however, drift and occlusions have limited the ability of these systems to reliably capture motion over long durations. In this paper, we propose to combine 3D pose estimation from inertial motion capture with 2D pose estimation from vision to obtain more robust posture tracking. To handle the changing appearance of the human body due to pose variations and illumination changes, our implementation is based on Least Soft-Threshold Squares Tracking. Constraints on the variation of the appearance model and the pose estimated by an inertial motion capture system are used to correct the 2D and 3D estimates simultaneously. We compare the performance of our method against three state-of-the-art trackers: Incremental Visual Tracking, Multiple Instance Learning, and Least Soft-Threshold Squares Tracking. In our experiments, we track the movement of the upper limbs. While the results indicate an improvement in tracking accuracy at some joint locations, they also show that there is room for further improvement. Conclusions and the further work required to improve our results are discussed.
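The abstract does not give implementation details, so the following is only a minimal sketch of the two ingredients it names: the least soft-threshold squares (LSS) appearance distance, in which a PCA subspace models appearance and a sparse error term absorbs occlusions and illumination outliers, and a simple weighted fusion of the visual 2D joint estimate with the inertial 3D estimate projected into the image. Function names, the regularization weight lam, the fusion weight w_imu, and the alternating-minimization loop are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch (NumPy); not the authors' released implementation.
import numpy as np

def soft_threshold(x, lam):
    """Element-wise soft-thresholding operator (proximal map of the L1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def lss_distance(y, U, mu, lam=0.1, n_iters=20):
    """Least soft-threshold squares distance of an observation y to a PCA
    appearance model (orthonormal basis U, mean mu), with a sparse error e
    absorbing occlusions and illumination outliers.

    Solves  min_{z,e}  0.5*||y - mu - U z - e||^2 + lam*||e||_1
    by alternating closed-form updates for z and e.
    """
    r = y - mu
    e = np.zeros_like(y)
    z = np.zeros(U.shape[1])
    for _ in range(n_iters):
        z = U.T @ (r - e)                    # least-squares coefficients (U orthonormal)
        e = soft_threshold(r - U @ z, lam)   # sparse error update
    resid = r - U @ z - e
    return 0.5 * np.dot(resid, resid) + lam * np.abs(e).sum()

def fuse_2d(p_visual, p_imu_projected, w_imu=0.3):
    """Blend the visual tracker's 2D joint estimate with the inertial 3D
    estimate projected into the image plane (weights are illustrative)."""
    return (1.0 - w_imu) * np.asarray(p_visual) + w_imu * np.asarray(p_imu_projected)
```

In such a scheme the LSS distance would score candidate image patches for the visual tracker, while the inertial projection constrains where those candidates are allowed to drift; the constant fusion weight above stands in for whatever weighting or constraint the paper actually applies.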

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms
  • Biomechanical Phenomena
  • Humans
  • Image Processing, Computer-Assisted
  • Models, Biological
  • Movement
  • Posture
  • Upper Extremity / physiology*
  • Video Recording