Supporting One-Time Point Annotations for Gesture Recognition

IEEE Trans Pattern Anal Mach Intell. 2017 Nov;39(11):2270-2283. doi: 10.1109/TPAMI.2016.2637350. Epub 2016 Dec 8.

Abstract

This paper investigates a new annotation technique that significantly reduces the time required to annotate training data for gesture recognition. Conventionally, annotations comprise the start and end times of gestures in sensor recordings, together with the corresponding labels. In this work, we propose one-time point annotations, in which labelers do not have to select start and end times carefully but simply mark a single time point while a gesture is occurring. This gives labelers more freedom and significantly reduces their burden. To make one-time point annotations usable, we propose a novel BoundarySearch algorithm that automatically finds the correct temporal boundaries of gestures by discovering data patterns around their one-time point annotations. The corrected annotations are then used to train gesture models. We evaluate the method on three wearable gesture recognition applications with varying numbers of gesture classes (10-17) recorded with different sensor modalities. The results show that training on the corrected annotations achieves performance close to fully supervised training on clean annotations (at most 5 percent lower F1-score on average). Furthermore, the BoundarySearch algorithm is also evaluated on the ChaLearn 2014 multi-modal gesture recognition challenge dataset, recorded with Kinect sensors, and achieves similar results.
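To make the idea of recovering gesture boundaries from a single annotated time point concrete, the sketch below shows a naive energy-threshold heuristic: starting from the annotated sample, it expands left and right until the activity of the signal drops to a rest level. This is only an illustrative assumption, not the paper's BoundarySearch algorithm; the function name, parameters, and threshold rule are hypothetical.

```python
# Illustrative sketch only: a naive boundary search around a one-time point
# annotation. NOT the paper's BoundarySearch algorithm; the energy-threshold
# heuristic and all parameter choices are assumptions made for illustration.
import numpy as np

def estimate_boundaries(signal, point_idx, win=25, rest_ratio=0.2):
    """Expand left/right from a point annotation until activity falls to rest level.

    signal     : 1-D array, e.g. accelerometer magnitude over time
    point_idx  : sample index of the one-time point annotation (inside the gesture)
    win        : smoothing window (in samples) for the activity envelope
    rest_ratio : fraction of the activity at point_idx treated as 'rest'
    """
    # Activity envelope: moving average of the absolute deviation from the mean.
    dev = np.abs(signal - signal.mean())
    kernel = np.ones(win) / win
    activity = np.convolve(dev, kernel, mode="same")

    threshold = rest_ratio * activity[point_idx]

    # Walk outwards from the annotated point until activity drops below threshold.
    start = point_idx
    while start > 0 and activity[start - 1] > threshold:
        start -= 1

    end = point_idx
    while end < len(signal) - 1 and activity[end + 1] > threshold:
        end += 1

    return start, end


if __name__ == "__main__":
    # Synthetic example: a motion burst embedded in low-amplitude noise.
    rng = np.random.default_rng(0)
    x = 0.05 * rng.standard_normal(1000)
    x[400:550] += np.sin(np.linspace(0, 6 * np.pi, 150))  # the "gesture"
    print(estimate_boundaries(x, point_idx=470))           # roughly (400, 550)
```

In the paper's setting, the recovered start and end indices would then replace the rough one-time point labels as training annotations for the gesture models.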

MeSH terms

  • Accelerometry
  • Algorithms
  • Gestures*
  • Humans
  • Image Processing, Computer-Assisted / methods*
  • Pattern Recognition, Automated / methods*
  • Supervised Machine Learning*
  • Video Recording
  • Wearable Electronic Devices