Human action recognition based on kinematic similarity in real time

PLoS One. 2017 Oct 26;12(10):e0185719. doi: 10.1371/journal.pone.0185719. eCollection 2017.

Abstract

Human action recognition using 3D pose data has attracted growing interest in the fields of human-robot interfaces and pattern recognition since hardware for capturing human pose became available. In this paper, we propose a fast, simple, and powerful method for human action recognition based on human kinematic similarity. The key to this method is an action descriptor built from joint positions, angular velocities, and angular accelerations, which accommodates different individual body sizes and eliminates complex normalization. The angular parameters of the joints within a short sliding time window (approximately 5 frames) around the current frame are used to represent each pose frame of a human action sequence. Moreover, three modified KNN (k-nearest-neighbor) classifiers are employed in our method: one to compute the confidence of every frame in the training step, one to estimate the frame label of each descriptor, and one to classify actions. Additionally estimating each frame's time label makes it possible to handle single input frames, so the approach can be applied to difficult, unsegmented sequences. The proposed method is efficient and runs in real time. Our research also shows that many public datasets are irregularly segmented, and we provide a simple method to regularize them. The approach is tested on challenging datasets such as MSR-Action3D, MSRDailyActivity3D, and UTD-MHAD, and the results indicate that our method achieves higher accuracy.
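To make the described pipeline concrete, the sketch below (Python/NumPy, not the authors' code) illustrates the two core ideas from the abstract: a per-frame descriptor built from joint angles together with their finite-difference angular velocities and accelerations over an approximately 5-frame window, and a plain k-nearest-neighbor vote over frame descriptors. The window length W, the joint-angle definition, and all function names are illustrative assumptions; the paper's three modified KNN classifiers are not reproduced here.

```python
# Minimal sketch of the descriptor + KNN idea from the abstract.
# Assumptions: W, joint_angle(), and the angle triples are illustrative,
# not taken from the paper.
import numpy as np

W = 5  # sliding window length (~5 frames, per the abstract)

def joint_angle(a, b, c):
    """Angle at joint b formed by the segments b->a and b->c."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def frame_descriptor(seq, t, triples):
    """seq: (T, J, 3) joint positions; triples: (i, j, k) joint indices
    defining the angles. Returns the angles, angular velocities, and
    angular accelerations at frame t, estimated inside a W-frame window."""
    lo, hi = max(0, t - W // 2), min(len(seq), t + W // 2 + 1)
    angles = np.array([[joint_angle(f[i], f[j], f[k]) for i, j, k in triples]
                       for f in seq[lo:hi]])
    vel = np.gradient(angles, axis=0)   # finite-difference angular velocity
    acc = np.gradient(vel, axis=0)      # angular acceleration
    mid = t - lo                        # index of frame t within the window
    return np.concatenate([angles[mid], vel[mid], acc[mid]])

def knn_label(query, train_X, train_y, k=5):
    """Plain KNN majority vote (train_y: non-negative integer labels)."""
    d = np.linalg.norm(train_X - query, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()
```

In the full method, per-frame labels of this kind would feed a further classifier that decides the action class of the whole, possibly unsegmented, sequence; that step and the frame-confidence weighting learned during training are beyond this sketch.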

MeSH terms

  • Algorithms
  • Biomechanical Phenomena*
  • Humans
  • Pattern Recognition, Automated*

Grants and funding

This work was supported by the National Natural Science Foundation of China (http://www.nsfc.gov.cn/) under grant 91420301, received by GX. The funder had a role in study design.