Globally and Locally Semantic Colorization via Exemplar-Based Broad-GAN

IEEE Trans Image Process. 2021;30:8526-8539. doi: 10.1109/TIP.2021.3117061. Epub 2021 Oct 13.

Abstract

Given a target grayscale image and a reference color image, exemplar-based image colorization aims to generate a visually natural color image by transferring meaningful color information from the reference image to the target image. The task remains challenging because of differences in semantic content between the target and reference images. In this paper, we present a novel globally and locally semantic colorization method, an exemplar-based conditional broad-GAN: a broad generative adversarial network (GAN) framework designed to address this challenge. Our colorization framework is composed of two sub-networks: a match sub-net and a colorization sub-net. In the match sub-net, we reconstruct the target image with a dictionary-based sparse representation, where the dictionary consists of features extracted from the reference image. To enforce global-semantic and local-structure self-similarity constraints, a global-local affinity energy is introduced to regularize the sparse representation for matching consistency. The matching information from the match sub-net is then fed into the colorization sub-net as the perceptual information of the conditional broad-GAN, guiding it toward personalized results. Finally, inspired by the observation that a broad learning system can extract semantic features efficiently, we introduce a broad learning system into the conditional GAN and propose a novel loss, which substantially improves training stability and the semantic similarity between the colorized target image and the ground truth. Extensive experiments show that our colorization approach outperforms state-of-the-art methods both perceptually and semantically.
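
To make the match sub-net's core idea concrete, the following is a minimal sketch of dictionary-based sparse matching: target features are coded as sparse combinations of reference features. The feature shapes and the use of scikit-learn's SparseCoder with orthogonal matching pursuit are illustrative assumptions, not the paper's implementation; in particular, the global-local affinity energy that the paper uses to constrain the codes is omitted here.

```python
# Sketch: reconstruct target features as sparse combinations of reference
# features (the dictionary). Shapes and solver choices are assumptions.
import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(0)

# Hypothetical deep features: one row per spatial location, one column per channel.
ref_features = rng.standard_normal((256, 64))     # dictionary atoms from the reference image
target_features = rng.standard_normal((100, 64))  # features of the grayscale target image

# Normalize dictionary atoms to unit norm before sparse coding.
dictionary = ref_features / np.linalg.norm(ref_features, axis=1, keepdims=True)

coder = SparseCoder(dictionary=dictionary,
                    transform_algorithm="omp",
                    transform_n_nonzero_coefs=5)
codes = coder.transform(target_features)          # sparse matching weights, shape (100, 256)

# Each target location is matched to the reference locations with nonzero weight;
# in the full method these matches carry the reference color information.
reconstruction = codes @ dictionary
print("mean reconstruction error:", np.mean((reconstruction - target_features) ** 2))
```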
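Similarly, a compact illustration of the broad learning system idea that the colorization sub-net draws on, in the spirit of Chen and Liu's broad learning framework: mapped feature nodes and enhancement nodes are concatenated into a flat layer whose output weights have a closed-form ridge-regression solution. All sizes, activations, and targets below are assumptions for illustration, not the paper's configuration.

```python
# Sketch of a broad learning system (BLS) forward pass: random mapped feature
# nodes, nonlinear enhancement nodes, and a learned linear readout.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((32, 128))   # hypothetical batch of input features
Y = rng.standard_normal((32, 10))    # hypothetical training targets

# Mapped feature nodes: random linear maps of the input.
W_feat = rng.standard_normal((128, 64))
Z = np.tanh(X @ W_feat)

# Enhancement nodes: nonlinear expansions of the mapped features.
W_enh = rng.standard_normal((64, 32))
H = np.tanh(Z @ W_enh)

# The broad layer concatenates both groups; only the output weights are
# learned, here via a closed-form ridge-regression solution.
A = np.hstack([Z, H])
lam = 1e-2
W_out = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)
print("prediction shape:", (A @ W_out).shape)
```

Because only the readout is trained, feature extraction in a BLS is cheap relative to a deep network, which is the efficiency property the abstract appeals to.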