Meta-Knowledge Learning and Domain Adaptation for Unseen Background Subtraction

IEEE Trans Image Process. 2021;30:9058-9068. doi: 10.1109/TIP.2021.3122102. Epub 2021 Nov 2.

Abstract

Background subtraction is a classic video processing task that underpins numerous visual applications such as video surveillance and traffic monitoring. Given the diversity and variability of real application scenes, an ideal background subtraction model should be robust across a wide range of scenarios. Although deep-learning approaches have delivered unprecedented improvements, they often fail to generalize to unseen scenes, which makes them less suitable for large-scale deployment. In this work, we tackle cross-scene background subtraction with a two-phase framework comprising meta-knowledge learning and domain adaptation. Specifically, observing that meta-knowledge (i.e., scene-independent common knowledge) is the cornerstone of generalization to unseen scenes, we draw on traditional frame-differencing algorithms and design a deep difference network (DDN) that encodes meta-knowledge, especially temporal-change knowledge, from diverse cross-scene data (the source domain), excluding sequences with intermittent foreground motion patterns. In addition, we explore a self-training domain adaptation strategy based on iterative evolution: with iteratively updated pseudo-labels, the DDN is continuously fine-tuned and evolves progressively toward unseen scenes (the target domain) in an unsupervised fashion. Our framework can thus be deployed on unseen scenes without relying on their annotations. Experiments on the CDnet2014 dataset show that it brings a significant improvement to background subtraction: our method runs at a favorable 70 fps and outperforms the best unsupervised algorithm and the top supervised algorithm designed for unseen scenes by 9% and 3%, respectively.
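
The abstract does not specify the DDN architecture, so the following is only a minimal sketch of the frame-differencing idea it builds on: a small network that consumes the temporal difference between a frame and a reference, rather than raw appearance. The class name DiffNet and all layer sizes are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn

class DiffNet(nn.Module):
    """Hypothetical sketch of a difference network: it operates on the
    absolute frame difference (a scene-independent temporal-change cue,
    as in classic frame differencing) instead of raw appearance."""
    def __init__(self):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 1),  # per-pixel foreground logit
        )

    def forward(self, frame_t, frame_ref):
        # Frame differencing supplies the temporal-change meta-knowledge;
        # the network refines it into a foreground probability map.
        diff = torch.abs(frame_t - frame_ref)
        return torch.sigmoid(self.head(diff))

net = DiffNet()
frames = torch.rand(1, 3, 240, 320)     # current frame (toy data)
reference = torch.rand(1, 3, 240, 320)  # reference/background frame
mask = net(frames, reference)           # (1, 1, 240, 320), values in [0, 1]
```

Feeding the network a difference image rather than raw frames is one plausible way to keep the learned knowledge scene-independent, since the appearance of any particular scene is largely cancelled out.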
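The self-training strategy is likewise described only at a high level. Under the assumption that pseudo-labels are obtained by thresholding the model's own confident predictions on the target scene, one adaptation round could be sketched as below (reusing net, frames, and reference from the previous sketch; the thresholds lo and hi are assumptions, not values from the paper):

```python
import torch
import torch.nn.functional as F

def self_training_round(net, optimizer, target_frames, reference,
                        lo=0.2, hi=0.8):
    """One hypothetical self-training iteration: predict on the unseen
    (target) scene, keep only confident pixels as pseudo-labels, and
    fine-tune the network on them."""
    net.eval()
    with torch.no_grad():
        prob = net(target_frames, reference)
    pseudo = (prob > hi).float()          # confident foreground = 1
    keep = (prob > hi) | (prob < lo)      # ignore uncertain pixels

    if not keep.any():                    # nothing confident yet; skip
        return 0.0

    net.train()
    prob = net(target_frames, reference)
    loss = F.binary_cross_entropy(prob[keep], pseudo[keep])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
for round_idx in range(5):  # pseudo-labels are re-estimated each round
    loss = self_training_round(net, optimizer, frames, reference)
```

Re-estimating the pseudo-labels after each fine-tuning round is what gives the "iterative evolution" behavior the abstract describes: as the model adapts, its confident predictions on the target scene improve, which in turn yields better pseudo-labels for the next round.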