Fusion of Video and Inertial Sensing for Deep Learning-Based Human Action Recognition

Sensors (Basel). 2019 Aug 24;19(17):3680. doi: 10.3390/s19173680.

Abstract

This paper presents the simultaneous use of video images and inertial signals, captured at the same time by a video camera and a wearable inertial sensor, within a fusion framework in order to achieve more robust human action recognition than when each sensing modality is used individually. The data captured by these sensors are converted into 3D video images and 2D inertial images, which are then fed into a 3D convolutional neural network and a 2D convolutional neural network, respectively, for action recognition. Two types of fusion are considered: decision-level fusion and feature-level fusion. Experiments are conducted on the publicly available UTD-MHAD dataset, in which simultaneous video images and inertial signals are captured for a total of 27 actions. The results indicate that both the decision-level and feature-level fusion approaches achieve higher recognition accuracies than when each sensing modality is used individually. The highest accuracy of 95.6% is obtained with the decision-level fusion approach.
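
The following is a minimal sketch, in PyTorch, of the two fusion schemes named in the abstract: a 3D CNN branch for video clips, a 2D CNN branch for images formed from inertial signals, feature-level fusion by concatenating the two feature vectors before a joint classifier, and decision-level fusion by combining per-modality class scores. It is not the authors' exact architecture; the layer sizes, clip length, image size, and the use of softmax averaging for decision-level fusion are assumptions for illustration. Only the number of action classes (27, from UTD-MHAD) comes from the paper.

    # Minimal sketch (not the authors' exact networks) of video/inertial fusion.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    NUM_ACTIONS = 27  # UTD-MHAD contains 27 action classes

    class Video3DCNN(nn.Module):
        """3D CNN branch operating on short video clips (B, 3, T, H, W)."""
        def __init__(self, feat_dim=128):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.fc = nn.Linear(32, feat_dim)

        def forward(self, clip):
            return self.fc(self.features(clip).flatten(1))

    class Inertial2DCNN(nn.Module):
        """2D CNN branch operating on images formed from inertial signals (B, 1, H, W)."""
        def __init__(self, feat_dim=128):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(32, feat_dim)

        def forward(self, img):
            return self.fc(self.features(img).flatten(1))

    class FeatureLevelFusion(nn.Module):
        """Feature-level fusion: concatenate both feature vectors, classify jointly."""
        def __init__(self, feat_dim=128, num_classes=NUM_ACTIONS):
            super().__init__()
            self.video_net = Video3DCNN(feat_dim)
            self.inertial_net = Inertial2DCNN(feat_dim)
            self.classifier = nn.Linear(2 * feat_dim, num_classes)

        def forward(self, clip, img):
            fused = torch.cat([self.video_net(clip), self.inertial_net(img)], dim=1)
            return self.classifier(fused)

    def decision_level_fusion(video_logits, inertial_logits):
        """Decision-level fusion: combine per-modality class scores
        (here, an equal-weight average of softmax outputs is assumed)."""
        return 0.5 * (F.softmax(video_logits, dim=1) + F.softmax(inertial_logits, dim=1))

    if __name__ == "__main__":
        clip = torch.randn(2, 3, 16, 112, 112)  # batch of 2 video clips (assumed size)
        img = torch.randn(2, 1, 64, 64)         # batch of 2 inertial images (assumed size)
        print(FeatureLevelFusion()(clip, img).shape)  # torch.Size([2, 27])

In this sketch, feature-level fusion trains a single classifier on the concatenated features, whereas decision-level fusion keeps the two classifiers separate and merges only their class-score outputs, mirroring the two approaches compared in the paper.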

Keywords: decision-level and feature-level fusion for action recognition; deep learning-based action recognition; fusion of video and inertial sensing for action recognition.

MeSH terms

  • Algorithms
  • Deep Learning
  • Humans
  • Neural Networks, Computer
  • Video Recording*
  • Vision, Ocular / physiology*
  • Wearable Electronic Devices*