Convolutional Regression for Visual Tracking

IEEE Trans Image Process. 2018 Mar 28. doi: 10.1109/TIP.2018.2819362. Online ahead of print.

Abstract

Recently, discriminatively learned correlation filters (DCF) have attracted much attention in the visual object tracking community. The success of DCF can potentially be attributed to the fact that a large number of samples are used to train the ridge regression model that predicts the location of an object. To solve the regression problem efficiently, these samples are all generated by circularly shifting a search patch. However, these synthetic samples also introduce negative effects that weaken the robustness of DCF-based trackers. In this paper, we propose a new approach to learning the regression model for visual tracking with a single convolutional layer. Instead of learning the linear regression model in closed form, we solve the regression problem by optimizing a one-channel-output convolutional layer with gradient descent (GD). In particular, the kernel size of the convolutional layer is set to the size of the object. In contrast to DCF, this makes it possible to incorporate all "real" samples clipped from the whole image. A critical issue of the GD approach is that most of the convolutional samples are negative, so the contribution of positive samples is suppressed. To address this problem, we propose a novel objective function that eliminates easy negatives and enhances positives. We perform extensive experiments on four widely used datasets: OTB-100, OTB-50, TempleColor, and VOT-2016. The results show that the proposed algorithm achieves outstanding performance and outperforms most existing DCF-based algorithms.
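To make the core idea concrete, the following is a minimal NumPy sketch, not the paper's implementation: a single object-sized kernel is slid over a toy single-channel search region (so every window is a "real" sample, not a circular shift) and fitted to a Gaussian response map by gradient descent, with easy negatives down-weighted so positives are not suppressed. The feature map, object position, weighting rule, and all hyper-parameters are illustrative assumptions.

```python
import numpy as np

def conv_response(x, w):
    """Valid cross-correlation: slide the object-sized kernel over the search region."""
    H, W = x.shape
    h, k = w.shape
    out = np.empty((H - h + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + h, j:j + k] * w)
    return out

def gaussian_label(shape, center, sigma=2.0):
    """Soft regression target peaking at the assumed object centre."""
    ys, xs = np.ogrid[:shape[0], :shape[1]]
    return np.exp(-((ys - center[0]) ** 2 + (xs - center[1]) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(0)
x = rng.standard_normal((24, 24))                # toy single-channel feature map (hypothetical)
obj = (10, 10)                                   # assumed object position in response coordinates
x[obj[0]:obj[0] + 5, obj[1]:obj[1] + 5] += 2.0   # a bright blob standing in for the object
w = np.zeros((5, 5))                             # kernel sized like the object
y = gaussian_label((20, 20), obj)

lr, lam = 1e-3, 1e-2                             # step size and ridge weight (assumed values)
loss_history = []
for step in range(200):
    r = conv_response(x, w)
    # Down-weight easy negatives (positions with near-zero labels) so the few
    # positives are not swamped -- a simple stand-in for the paper's objective.
    weight = np.maximum(y, 0.1)
    diff = weight * (r - y)
    loss_history.append(np.mean(weight * (r - y) ** 2))
    grad = np.zeros_like(w)
    for i in range(diff.shape[0]):
        for j in range(diff.shape[1]):
            grad += diff[i, j] * x[i:i + 5, j:j + 5]
    w -= lr * (grad / diff.size + lam * w)       # gradient descent with weight decay
```

After training, the response map peaks where the kernel matches the object-like blob; in practice the kernel would be applied to deep features of the next frame to localize the target.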