STAC: Spatial-Temporal Attention on Compensation Information for Activity Recognition in FPV

Sensors (Basel). 2021 Feb 5;21(4):1106. doi: 10.3390/s21041106.

Abstract

Egocentric activity recognition in first-person video (FPV) requires fine-grained matching between the camera wearer's actions and the objects being manipulated. Traditional methods for third-person action recognition do not suffice because of (1) the background ego-noise introduced by the unstructured movement of the wearable device as the body moves, and (2) the small, fine-grained objects that appear at only a single scale in FPV. We perform size compensation to augment the data: it generates a multi-scale set of regions containing objects of multiple sizes, leading to superior performance. We also compensate the optical flow to eliminate camera motion noise. We develop a novel two-stream convolutional neural network-recurrent attention neural network (CNN-RAN) architecture, spatial-temporal attention on compensation information (STAC), which generates generic descriptors under weak supervision, focuses on the locations of activated objects, and captures effective motion. We encode the RGB features with a spatial location-aware attention mechanism that guides the visual feature representation. A similar location-aware channel attention is applied to the temporal stream, which takes stacked optical flow as input, to implicitly select relevant frames and attend to where the action occurs. The two streams are complementary: one is object-centric while the other focuses on motion. Extensive ablation analysis validates the complementarity and effectiveness of our STAC model both qualitatively and quantitatively; it achieves state-of-the-art performance on two egocentric datasets.
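
To make the two ideas in the abstract concrete, below is a minimal PyTorch sketch of (a) ego-motion compensation of the optical flow and (b) location-aware spatial attention over an RGB feature map. The median-based camera-motion estimate, the 1x1-convolution scoring, and all names and shapes are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def compensate_flow(flow: torch.Tensor) -> torch.Tensor:
    """Subtract an estimate of global (camera/ego) motion from an optical
    flow field. The per-frame median displacement is a simple stand-in
    for whatever compensation scheme the paper actually uses."""
    # flow: (batch, 2, H, W) horizontal and vertical displacements.
    b, c, h, w = flow.shape
    global_motion = flow.view(b, c, -1).median(dim=-1).values  # (batch, 2)
    return flow - global_motion.view(b, c, 1, 1)


class LocationAwareSpatialAttention(nn.Module):
    """Re-weights each spatial location of a CNN feature map so the model
    focuses on the regions containing the activated object."""

    def __init__(self, channels: int):
        super().__init__()
        # A 1x1 convolution scores every location from its channel vector.
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, channels, H, W) from the RGB-stream backbone.
        b, c, h, w = feats.shape
        attn = self.score(feats).view(b, 1, h * w)       # per-location scores
        attn = F.softmax(attn, dim=-1).view(b, 1, h, w)  # normalize over locations
        return feats * attn                              # attended features


# Usage: compensate the flow fed to the temporal stream, and attend over
# the RGB-stream feature map before pooling and classification.
flow = torch.randn(2, 2, 224, 224)
feats = torch.randn(2, 512, 7, 7)  # e.g., a ResNet conv5 output
print(compensate_flow(flow).shape)                       # torch.Size([2, 2, 224, 224])
print(LocationAwareSpatialAttention(512)(feats).shape)   # torch.Size([2, 512, 7, 7])
```

An analogous channel attention on the stacked-flow stream would score channels (frames) rather than spatial locations, which is one way to realize the implicit frame selection the abstract describes.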

Keywords: compensation information; egocentric video analysis; fine-grained activity recognition; location-aware attention.

MeSH terms

  • Attention
  • Humans
  • Image Processing, Computer-Assisted*
  • Motion
  • Neural Networks, Computer*