Panoptic blind image inpainting

ISA Trans. 2023 Jan:132:208-221. doi: 10.1016/j.isatra.2022.10.030. Epub 2022 Nov 1.

Abstract

In autonomous driving, scene understanding is critical for recognizing the driving environment and dangerous situations. However, a variety of factors, including foreign objects on the lens, cloudy weather, and light blur, often reduce the accuracy of scene recognition. In this paper, we propose a new blind image inpainting model that accurately reconstructs images in real environments where no ground truth is available for restoration. To this end, we first introduce a panoptic map to represent content information in detail and design an encoder-decoder structure that predicts both the panoptic map and the corrupted-region mask. Then, we construct an image inpainting model that exploits the information in the predicted map. Lastly, we present a mask refinement process to improve the accuracy of map prediction. To evaluate the effectiveness of the proposed model, we compare the restoration results of various inpainting methods on the Cityscapes and COCO datasets. Experimental results show that the proposed model outperforms other blind image inpainting models in terms of L1/L2 losses, PSNR, and SSIM, and achieves performance comparable to image inpainting techniques that utilize additional information.
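The reconstruction metrics cited above (L1/L2 losses and PSNR) are standard and can be sketched in a few lines; the snippet below is an illustrative reference implementation for images represented as flat lists of pixel values in [0, 1], not the paper's evaluation code (SSIM, which involves local windowed statistics, is omitted for brevity).

```python
import math

def l1_loss(x, y):
    """Mean absolute error between two images of equal size."""
    return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

def l2_loss(x, y):
    """Mean squared error between two images of equal size."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def psnr(x, y, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means a closer reconstruction."""
    mse = l2_loss(x, y)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

# Example: an MSE of 0.01 corresponds to a PSNR of 20 dB.
print(psnr([0.1, 0.1], [0.0, 0.2]))
```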

Keywords: Blind image inpainting; Contextual information; Generative Adversarial Networks; Image restoration; Panoptic segmentation.