An Efficient Human Instance-Guided Framework for Video Action Recognition

Sensors (Basel). 2021 Dec 12;21(24):8309. doi: 10.3390/s21248309.

Abstract

In recent years, human action recognition has been studied extensively by computer vision researchers. Recent studies have employed two-stream networks that combine appearance and motion features, but most of these approaches focus on clip-level video action recognition. In contrast to traditional methods that generally use entire images, we propose a new human instance-level video action recognition framework. In this framework, we represent instance-level features using human boxes and keypoints, and feed the resulting action region features into a temporal action head network, which makes our framework more discriminative. We also propose novel temporal action head networks composed of various modules that capture diverse temporal dynamics. In experiments, the proposed models achieve performance comparable to state-of-the-art approaches on two challenging datasets. Furthermore, we evaluate the proposed features and networks to verify their effectiveness. Finally, we analyze the confusion matrix and visualize the recognized actions at the human instance level when multiple people are present.
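To make the instance-level pipeline concrete, the sketch below illustrates the general idea described in the abstract: per-person region features are pooled from frame-level feature maps using detected human boxes, and a small temporal head classifies each tracked person's action. This is not the authors' implementation; the GRU head, the 7x7 RoIAlign resolution, the feature dimensions, and the class count are all assumptions chosen for illustration.

```python
# Illustrative sketch only (assumed components, not the paper's code):
# pool per-person features with RoIAlign over tracked boxes, then aggregate
# them over time with a GRU standing in for the temporal action head network.
import torch
import torch.nn as nn
from torchvision.ops import roi_align


class InstanceTemporalHead(nn.Module):
    def __init__(self, feat_channels=256, hidden=512, num_classes=60):
        super().__init__()
        self.proj = nn.Linear(feat_channels * 7 * 7, hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.cls = nn.Linear(hidden, num_classes)

    def forward(self, frame_feats, boxes_per_frame):
        # frame_feats: list of T tensors, each (1, C, H, W) from a 2D backbone
        # boxes_per_frame: list of T tensors, each (1, 4) box for one tracked person
        pooled = []
        for feat, box in zip(frame_feats, boxes_per_frame):
            # Crop the person's region into a fixed 7x7 grid of features
            roi = roi_align(feat, [box], output_size=(7, 7), spatial_scale=1.0)
            pooled.append(self.proj(roi.flatten(1)))
        seq = torch.stack(pooled, dim=1)   # (1, T, hidden) per-instance sequence
        _, h = self.gru(seq)               # temporal aggregation over the track
        return self.cls(h[-1])             # per-instance action logits


if __name__ == "__main__":
    T = 8  # hypothetical track length
    feats = [torch.randn(1, 256, 14, 14) for _ in range(T)]
    boxes = [torch.tensor([[2.0, 2.0, 10.0, 12.0]]) for _ in range(T)]
    logits = InstanceTemporalHead()(feats, boxes)
    print(logits.shape)  # torch.Size([1, 60])
```

In this reading, predictions are produced per tracked person rather than per clip, which is what allows actions to be recognized and visualized at the human instance level when several people appear in the same video.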

Keywords: convolutional neural network; human action recognition; human detection; multiple human tracking; temporal sequence analysis.

MeSH terms

  • Human Activities*
  • Humans
  • Motion
  • Neural Networks, Computer*
  • Recognition, Psychology
  • Vision, Ocular