Learning the Synthesizability of Dynamic Texture Samples

IEEE Trans Image Process. 2018 Dec 14. doi: 10.1109/TIP.2018.2886807. Online ahead of print.

Abstract

Exemplar-based dynamic texture synthesis (EDTS) aims to generate new, high-quality samples that are perceptually similar to a given dynamic texture (DT) exemplar. This paper addresses the problem of learning the synthesizability of DT samples: given a DT sample, how well can it be synthesized by EDTS methods, and which EDTS algorithm is most suitable for the task? To this end, we propose associating DT samples with synthesizability scores by learning regression models on a compiled DT dataset annotated in terms of synthesizability. More precisely, we first define the synthesizability of DT samples and characterize them with a set of spatiotemporal features. We then train regression models on the annotated dataset, using this feature representation, to predict the synthesizability scores of DT samples, and learn classifiers to select the most suitable EDTS algorithm. We further carry out sample selection, partitioning, and synthesizability prediction in a hierarchical scheme. Finally, the learned synthesizability is applied to detect synthesizable regions in videos. Both quantitative and qualitative experiments demonstrate that our method efficiently learns and predicts the synthesizability of DT samples.
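To make the regression-and-classification pipeline concrete, the following Python sketch shows one plausible realization of the two learning steps described above: regressing a synthesizability score from per-sample spatiotemporal features, and classifying which EDTS algorithm fits each sample best. The feature dimensionality, the random-forest models, and the placeholder data are illustrative assumptions, not the paper's actual descriptors, annotations, or learners.

```python
# A minimal sketch, assuming precomputed spatiotemporal features and
# synthesizability annotations already exist. All data below is synthetic
# placeholder data; the models are hypothetical stand-ins for the paper's.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
features = rng.normal(size=(n, 64))   # hypothetical 64-D spatiotemporal descriptors per DT sample
scores = rng.uniform(size=n)          # annotated synthesizability scores in [0, 1]
methods = rng.integers(0, 3, size=n)  # hypothetical label of the best EDTS algorithm per sample

idx_train, idx_test = train_test_split(np.arange(n), test_size=0.25, random_state=0)

# Regression step: predict a continuous synthesizability score per DT sample.
reg = RandomForestRegressor(n_estimators=100, random_state=0)
reg.fit(features[idx_train], scores[idx_train])
pred_scores = reg.predict(features[idx_test])

# Classification step: select the most suitable EDTS algorithm per DT sample.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features[idx_train], methods[idx_train])
pred_methods = clf.predict(features[idx_test])
```

In such a setup, the predicted scores could rank or threshold candidate DT samples before synthesis, while the classifier routes each accepted sample to an EDTS method, mirroring the selection step the abstract describes.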