Compositional action recognition with multi-view feature fusion

PLoS One. 2022 Apr 14;17(4):e0266259. doi: 10.1371/journal.pone.0266259. eCollection 2022.

Abstract

Most action recognition methods treat an activity as a single event in a video clip. Recently, representing activities as combinations of verbs and nouns has proven effective for improving action understanding. However, there is still a lack of research on representation learning that uses cross-view or cross-modality information. To exploit the complementary information between multiple views, we propose a feature fusion framework consisting of two steps: extraction of appearance features and fusion of multi-view features. We validate our approach on two action recognition datasets, IKEA ASM and LEMMA. We demonstrate that multi-view fusion can effectively generalize across appearances and identify previously unseen actions of interacting objects, surpassing current state-of-the-art methods. In particular, on the IKEA ASM dataset, the multi-view fusion approach improves top-1 accuracy by 18.1% over the single-view approach.
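The abstract's two-step pipeline (per-view appearance feature extraction, then multi-view fusion into a compositional verb + noun prediction) can be illustrated with a minimal PyTorch sketch. The backbone, the concatenation-based late fusion, and the head sizes below are illustrative assumptions for exposition, not the paper's actual architecture.

```python
# Minimal sketch of multi-view feature fusion for compositional action
# recognition (verb + noun heads). The encoder, the fusion operator
# (concatenation followed by a linear projection), and the label-space
# sizes are assumptions; the abstract does not specify these details.
import torch
import torch.nn as nn


class MultiViewFusion(nn.Module):
    def __init__(self, num_views: int, feat_dim: int = 512,
                 num_verbs: int = 12, num_nouns: int = 33):  # placeholder sizes
        super().__init__()
        # Step 1: shared appearance encoder applied to every view.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Step 2: late fusion of per-view features by concatenation.
        self.fuse = nn.Linear(num_views * feat_dim, feat_dim)
        # Compositional heads: the action is predicted as verb + noun.
        self.verb_head = nn.Linear(feat_dim, num_verbs)
        self.noun_head = nn.Linear(feat_dim, num_nouns)

    def forward(self, views: torch.Tensor):
        # views: (batch, num_views, 3, H, W)
        b = views.shape[0]
        feats = self.encoder(views.flatten(0, 1))  # (batch * num_views, feat_dim)
        fused = self.fuse(feats.view(b, -1))       # (batch, feat_dim)
        return self.verb_head(fused), self.noun_head(fused)


model = MultiViewFusion(num_views=3)
verb_logits, noun_logits = model(torch.randn(2, 3, 3, 224, 224))
print(verb_logits.shape, noun_logits.shape)  # torch.Size([2, 12]) torch.Size([2, 33])
```

Concatenation is only one plausible fusion operator; averaging or attention over views would slot into the same interface by replacing `self.fuse`.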

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Learning*
  • Recognition, Psychology*

Associated data

  • figshare/10.6084/m9.figshare.19410164
  • figshare/10.6084/m9.figshare.19410170

Grants and funding

This work was supported by the National Natural Science Foundation of China under Grant 62072246.