Joint Sparse Representation and Robust Feature-Level Fusion for Multi-Cue Visual Tracking

IEEE Trans Image Process. 2015 Dec;24(12):5826-41. doi: 10.1109/TIP.2015.2481325. Epub 2015 Sep 23.

Abstract

Visual tracking using multiple features has proved to be a robust approach because different features can complement each other. Since different types of variation, such as changes in illumination, occlusion, and pose, may occur in a video sequence, especially in long sequences, how to properly select and fuse the appropriate features has become one of the key problems in this approach. To address this issue, this paper proposes a new joint sparse representation model for robust feature-level fusion. The proposed method exploits the properties of sparse representation to dynamically remove unreliable features from the fusion used for tracking. To capture the non-linear similarity of features, we extend the proposed method to a general kernelized framework that can perform feature fusion in various kernel spaces. As a result, robust tracking performance is obtained. Qualitative and quantitative experimental results on publicly available videos show that the proposed method outperforms both sparse-representation-based and fusion-based trackers.
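The paper's exact model and kernelized solver are given in the full text. As a rough, hedged illustration of the kind of objective a joint sparse representation over multiple cues typically optimizes, the sketch below solves a standard L2,1-regularized multi-modality sparse coding problem with a plain proximal-gradient (ISTA) loop: each feature modality has its own dictionary of the same templates, and the row-wise L2,1 penalty encourages all cues to select the same templates, while rows that shrink to zero correspond to templates (or, in a per-feature variant, cues) that are effectively dropped. The function name, the fixed step size, and the simple solver are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def joint_sparse_code(Y, D, lam=0.1, n_iter=200):
    """Illustrative proximal-gradient solver for an L2,1-regularized
    joint sparse representation across K feature modalities (assumed
    formulation, not the paper's exact model).

    Y : list of K observation vectors; Y[k] has shape (d_k,)
    D : list of K dictionaries; D[k] has shape (d_k, n), n shared templates
    Returns C of shape (n, K); non-zero rows are templates selected
    jointly by all cues.
    """
    K = len(D)
    n = D[0].shape[1]
    C = np.zeros((n, K))
    # Step size from the Lipschitz constant of the smooth part
    # (largest squared spectral norm over the per-cue dictionaries).
    L = max(np.linalg.norm(Dk, 2) ** 2 for Dk in D)
    step = 1.0 / L
    for _ in range(n_iter):
        # Gradient step on each modality's least-squares residual.
        G = np.column_stack(
            [D[k].T @ (D[k] @ C[:, k] - Y[k]) for k in range(K)]
        )
        C = C - step * G
        # Proximal step for lam * ||C||_{2,1}: row-wise soft-thresholding,
        # which zeroes whole rows and yields the shared sparsity pattern.
        row_norms = np.linalg.norm(C, axis=1, keepdims=True)
        shrink = np.maximum(0.0, 1.0 - step * lam / np.maximum(row_norms, 1e-12))
        C = shrink * C
    return C
```

A kernelized variant, as described in the abstract, would replace the inner products D[k].T @ D[k] and D[k].T @ Y[k] with kernel evaluations on each feature's own kernel space; the update structure stays the same.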

Publication types

  • Research Support, Non-U.S. Gov't