Deep Action Parsing in Videos With Large-Scale Synthesized Data

IEEE Trans Image Process. 2018 Jun;27(6):2869-2882. doi: 10.1109/TIP.2018.2813530.

Abstract

Action parsing in videos with complex scenes is an interesting but challenging task in computer vision. In this paper, we propose a generic 3D convolutional neural network trained in a multi-task learning manner for effective Deep Action Parsing (DAP3D-Net) in videos. In particular, during the training phase, action localization, classification, and attribute learning are jointly optimized on our appearance-motion data via DAP3D-Net. Given a test video, DAP3D-Net can simultaneously describe each individual action in terms of where the action occurs, what the action is, and how the action is performed. To demonstrate the effectiveness of the proposed DAP3D-Net, we also contribute a new Numerous-category Aligned Synthetic Action data set, i.e., NASA, which consists of 200,000 action clips spanning over 300 categories, annotated with 33 pre-defined action attributes at two hierarchical levels (i.e., low-level attributes describing basic body part movements and high-level attributes related to action motion). We train DAP3D-Net on the NASA data set and then evaluate it on our collected Human Action Understanding data set and the public THUMOS data set. Experimental results show that our approach can accurately localize, categorize, and describe multiple actions in realistic videos.
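For illustration, the sketch below shows how a multi-task 3D convolutional network with separate localization, classification, and attribute heads might be wired up in PyTorch. It is only a minimal example of the multi-task idea described in the abstract; the layer sizes, head designs, class/attribute counts, and loss weighting are assumptions for the sketch and are not the DAP3D-Net architecture reported in the paper.

```python
import torch
import torch.nn as nn

class MultiTaskActionNet(nn.Module):
    """Illustrative multi-task 3D CNN: a shared 3D-conv backbone with three
    heads answering "where" (localization), "what" (classification), and
    "how" (attributes). All sizes are assumptions, not the paper's design."""

    def __init__(self, num_classes=300, num_attributes=33):
        super().__init__()
        # Shared spatio-temporal feature extractor (input: B x 3 x T x H x W).
        self.backbone = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
            nn.Conv3d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(kernel_size=2),
            nn.AdaptiveAvgPool3d(1),   # global pooling -> B x 128 x 1 x 1 x 1
            nn.Flatten(),              # -> B x 128
        )
        # Task-specific heads.
        self.loc_head = nn.Linear(128, 4)                 # "where": box (x, y, w, h)
        self.cls_head = nn.Linear(128, num_classes)       # "what": action category
        self.attr_head = nn.Linear(128, num_attributes)   # "how": multi-label attributes

    def forward(self, clip):
        feat = self.backbone(clip)
        return self.loc_head(feat), self.cls_head(feat), self.attr_head(feat)


if __name__ == "__main__":
    model = MultiTaskActionNet()
    clip = torch.randn(2, 3, 16, 112, 112)   # two 16-frame RGB clips
    boxes, logits, attrs = model(clip)

    # Joint multi-task objective: box regression + category cross-entropy
    # + multi-label attribute loss (targets and equal weights are dummies).
    loss = (nn.functional.smooth_l1_loss(boxes, torch.rand(2, 4))
            + nn.functional.cross_entropy(logits, torch.randint(0, 300, (2,)))
            + nn.functional.binary_cross_entropy_with_logits(attrs, torch.rand(2, 33)))
    loss.backward()
    print(loss.item())
```

The key point the sketch conveys is that a single shared backbone feeds three heads whose losses are summed and optimized jointly, so localization, categorization, and attribute description are learned from the same spatio-temporal features.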