Learning Long-term Structural Dependencies for Video Salient Object Detection

IEEE Trans Image Process. 2020 Sep 17:PP. doi: 10.1109/TIP.2020.3023591. Online ahead of print.

Abstract

Existing video salient object detection (VSOD) methods focus on exploring either short-term or long-term temporal information. However, temporal information is exploited at a global frame level or in a regular grid structure, neglecting inter-frame structural dependencies. In this paper, we propose to learn long-term structural dependencies with a structure-evolving graph convolutional network (GCN). In particular, we construct a graph for the entire video using a fast supervoxel segmentation method, in which nodes are connected according to spatio-temporal structural similarity. We infer the inter-frame structural dependencies of the salient object using convolutional operations on the graph. To prune redundant connections in the graph and better adapt to the moving salient object, we present an adaptive graph pooling that evolves the structure of the graph by dynamically merging similar nodes, thereby learning better hierarchical representations of the graph. Experiments on six public datasets show that our method outperforms all other state-of-the-art methods. Furthermore, we demonstrate that our proposed adaptive graph pooling can effectively improve the supervoxel algorithm in terms of segmentation accuracy.
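The core idea of the adaptive graph pooling described above, merging similar connected nodes to coarsen the graph, can be illustrated with a minimal sketch. This is not the paper's exact pooling operator (which is learned end-to-end); it only shows the node-merging mechanics under assumed inputs: a node feature matrix, a boolean adjacency matrix, and a fixed cosine-similarity threshold.

```python
import numpy as np

def adaptive_graph_pool(features, adjacency, threshold=0.9):
    """Illustrative sketch: merge connected node pairs whose feature
    cosine similarity exceeds `threshold`, averaging their features
    and taking the union of their edges. The `threshold` parameter
    and greedy merging rule are assumptions for demonstration."""
    n = features.shape[0]
    # Cosine similarity between all node pairs.
    unit = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = unit @ unit.T

    # Union-find: each node starts as its own cluster.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    # Merge adjacent, highly similar nodes into one cluster.
    for i in range(n):
        for j in range(i + 1, n):
            if adjacency[i, j] and sim[i, j] > threshold:
                parent[find(j)] = find(i)

    roots = sorted({find(i) for i in range(n)})
    index = {r: k for k, r in enumerate(roots)}
    m = len(roots)

    # Pooled node features: mean of the merged nodes' features.
    pooled_feat = np.zeros((m, features.shape[1]))
    counts = np.zeros(m)
    for i in range(n):
        k = index[find(i)]
        pooled_feat[k] += features[i]
        counts[k] += 1
    pooled_feat /= counts[:, None]

    # Pooled adjacency: union of edges between distinct clusters.
    pooled_adj = np.zeros((m, m), dtype=bool)
    for i in range(n):
        for j in range(n):
            ki, kj = index[find(i)], index[find(j)]
            if adjacency[i, j] and ki != kj:
                pooled_adj[ki, kj] = pooled_adj[kj, ki] = True
    return pooled_feat, pooled_adj
```

On a 3-node path graph where the first two nodes have nearly identical features, the call merges them into one pooled node, leaving a 2-node graph with a single edge; repeated application would yield progressively coarser hierarchical representations, as the abstract describes.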