Adaptive Part Mining for Robust Visual Tracking

IEEE Trans Pattern Anal Mach Intell. 2023 Oct;45(10):11443-11457. doi: 10.1109/TPAMI.2023.3275034. Epub 2023 Sep 5.

Abstract

Visual tracking aims to estimate the state of a target object in a video sequence, which is challenging under drastic appearance changes. Many existing trackers handle appearance variations by tracking divided object parts. However, these trackers commonly divide target objects into regular patches with a hand-designed splitting scheme, which is too coarse to align object parts well. Moreover, a fixed part detector struggles to partition targets of arbitrary categories and deformations. To address these issues, we propose a novel adaptive part mining tracker (APMT) for robust tracking via a transformer architecture comprising an object representation encoder, an adaptive part mining decoder, and an object state estimation decoder. The proposed APMT enjoys several merits. First, in the object representation encoder, the object representation is learned by distinguishing the target object from background regions. Second, in the adaptive part mining decoder, we introduce multiple part prototypes that adaptively capture target parts through cross-attention, accommodating arbitrary categories and deformations. Third, in the object state estimation decoder, we propose two novel strategies to effectively handle appearance variations and distractors. Extensive experimental results demonstrate that our APMT achieves promising results at high FPS. Notably, our tracker ranked first in the VOT-STb2022 challenge.
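The abstract does not give implementation details, but the core mechanism it describes, learnable part prototypes acting as queries that cross-attend to encoded object features, can be sketched minimally. The shapes, variable names, and the use of plain scaled dot-product attention below are illustrative assumptions, not the authors' actual architecture:

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: each query (a part
    prototype) attends over all key/value positions (encoded
    object features) and returns an adapted part embedding."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)            # (P, N) similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over positions
    return weights @ values                           # (P, d) part embeddings

rng = np.random.default_rng(0)
P, N, d = 4, 16, 8                          # assumed toy sizes
prototypes = rng.standard_normal((P, d))    # learnable part queries (assumed)
features = rng.standard_normal((N, d))      # encoder output features (assumed)

parts = cross_attention(prototypes, features, features)
print(parts.shape)
```

Because the prototypes are learned rather than tied to a fixed spatial grid, each one can latch onto whichever feature positions currently resemble its part, which is what lets the decoder adapt to arbitrary categories and deformations instead of relying on regular hand-designed patches.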