Zero-Shot Video Object Segmentation With Co-Attention Siamese Networks

IEEE Trans Pattern Anal Mach Intell. 2022 Apr;44(4):2228-2242. doi: 10.1109/TPAMI.2020.3040258. Epub 2022 Mar 4.

Abstract

We introduce a novel network, called the Co-attention Siamese Network (COSNet), to address the zero-shot video object segmentation task in a holistic fashion. We exploit the inherent correlation among video frames and incorporate a global co-attention mechanism to improve upon state-of-the-art deep-learning-based solutions, which primarily focus on learning discriminative foreground representations over appearance and motion within short-term temporal segments. The co-attention layers in COSNet provide efficient stages for capturing global correlations and scene context by jointly computing co-attention responses and appending them to a joint feature space. COSNet is a unified, end-to-end trainable framework from which different co-attention variants can be derived to capture diverse properties of the learned joint feature space. We train COSNet with pairs (or groups) of video frames, which naturally augments the training data and increases learning capacity. During segmentation, the co-attention model encodes useful information by processing multiple reference frames together, which helps it better infer frequently reappearing, salient foreground objects. Extensive experiments on three large benchmarks demonstrate that COSNet outperforms current alternatives by a large margin. Our implementation is available at https://github.com/carrierlxk/COSNet.
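To make the co-attention operation concrete, below is a minimal PyTorch sketch of a vanilla (fully connected) co-attention between the embeddings of two frames from the same video, in the spirit described above. The module name, tensor names, and the 1x1 fusion layer are illustrative assumptions, not the authors' code, which is available in the repository linked above.

```python
# A minimal sketch of vanilla co-attention between two frame embeddings.
# Names and the fusion layer are illustrative assumptions (see repo above).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttention(nn.Module):
    """Correlate two frame embeddings of the same video and append the
    resulting co-attention summaries to the original features."""

    def __init__(self, channels: int):
        super().__init__()
        # Learnable weight W in the affinity S = V_b^T (W V_a).
        self.weight = nn.Linear(channels, channels, bias=False)
        # 1x1 conv fusing [co-attention summary; original features].
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor):
        # feat_a, feat_b: (B, C, H, W) embeddings of two frames.
        b, c, h, w = feat_a.shape
        va = feat_a.flatten(2)                                 # (B, C, HW)
        vb = feat_b.flatten(2)                                 # (B, C, HW)
        wva = self.weight(va.transpose(1, 2)).transpose(1, 2)  # (B, C, HW)
        s = vb.transpose(1, 2) @ wva                           # (B, HW_b, HW_a)
        # Normalise the affinity in both directions so every location of one
        # frame attends over all locations of the other frame.
        za = vb @ F.softmax(s, dim=1)                          # summary for a
        zb = va @ F.softmax(s, dim=2).transpose(1, 2)          # summary for b
        # Append the co-attention responses to the original features.
        out_a = self.fuse(torch.cat([za.view(b, c, h, w), feat_a], dim=1))
        out_b = self.fuse(torch.cat([zb.view(b, c, h, w), feat_b], dim=1))
        return out_a, out_b

# Usage: co-attend the features of two frames sampled from one video.
coatt = CoAttention(channels=512)
fa = torch.randn(2, 512, 30, 30)
fb = torch.randn(2, 512, 30, 30)
out_a, out_b = coatt(fa, fb)   # each (2, 512, 30, 30)
```

The two-frame case corresponds to the pairwise training setup; at segmentation time, the abstract's grouping of multiple reference frames would amount to aggregating such summaries over several reference frames for each query frame.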