Context Adaptive Network for Image Inpainting

IEEE Trans Image Process. 2023;32:6332-6345. doi: 10.1109/TIP.2023.3298560. Epub 2023 Nov 20.

Abstract

In a typical image inpainting task, the location and shape of the damaged or masked area are often random and irregular. The vanilla convolutions widely used in learning-based inpainting models treat all spatial features as valid and share parameters across regions, making it difficult for them to cope with such irregular damage, so these models tend to produce inpainting results with color discrepancies and blurriness. In this paper, we propose a novel Context Adaptive Network (CANet) to address this issue. The main idea of the proposed CANet is to generate different weights depending on the input, which helps it complete images with diverse forms of damage in a flexible way. Specifically, the proposed CANet contains two novel context adaptive modules, namely the context adaptive block (CAB) and the cross-scale contextual attention (CSCA), which use attention mechanisms to cope with diverse content breakdowns. During forward propagation, the proposed CAB uses an adaptive term and weighs its importance against the convolution kernel, so as to dynamically balance features according to the degree of damage (a confidence level or soft mask); the overall computation is formulated as a classic convolution with an additional attention term that describes local structure. In addition, the proposed CSCA not only takes advantage of the contextual attention mechanism but also exploits cross-scale information transfer to generate plausible features for damaged areas, thereby alleviating the limited long-range modeling capability of convolutional neural networks. Qualitative and quantitative experiments show that our method outperforms state-of-the-art approaches, producing clearer, more coherent, and visually plausible inpainting results. The code can be found at github.com/dengyecode/CANet_image_inpainting.
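To make the CAB idea more concrete, below is a minimal sketch (not the authors' implementation) of a mask-conditioned adaptive convolution in PyTorch: a classic convolution whose output is re-weighted by an attention term predicted from the input features and a soft mask. The module and parameter names (ContextAdaptiveBlock, soft_mask) are hypothetical and chosen for illustration only.

```python
import torch
import torch.nn as nn


class ContextAdaptiveBlock(nn.Module):
    """Hypothetical sketch: convolution output balanced by a mask-derived
    adaptive attention term, in the spirit of the CAB described above."""

    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2
        # Classic convolution branch with shared parameters.
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        # Adaptive branch: predicts a per-pixel attention term from the
        # features and the soft mask (confidence that each pixel is valid).
        self.attn = nn.Sequential(
            nn.Conv2d(in_ch + 1, out_ch, kernel_size, padding=padding),
            nn.Sigmoid(),
        )

    def forward(self, x, soft_mask):
        # soft_mask: (B, 1, H, W), values near 1 = valid, near 0 = damaged.
        feat = self.conv(x)
        # The attention term dynamically re-weights the convolution output
        # according to the degree of damage at each spatial location.
        a = self.attn(torch.cat([x, soft_mask], dim=1))
        return feat * a


if __name__ == "__main__":
    block = ContextAdaptiveBlock(in_ch=64, out_ch=64)
    x = torch.randn(1, 64, 32, 32)
    mask = torch.rand(1, 1, 32, 32)  # soft mask / confidence map
    y = block(x, mask)
    print(y.shape)  # torch.Size([1, 64, 32, 32])
```

This sketch only illustrates the general pattern of combining a shared-parameter convolution with an input-dependent attention term; the actual CAB formulation, and the cross-scale attention in CSCA, follow the definitions given in the paper and the released code.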