Content preserving image translation with texture co-occurrence and spatial self-similarity for texture debiasing and domain adaptation

Neural Netw. 2023 Sep;166:722-737. doi: 10.1016/j.neunet.2023.07.049. Epub 2023 Aug 3.

Abstract

Models trained on datasets with texture bias usually perform poorly on out-of-distribution samples, since biased representations are embedded into the model. Recently, various image translation and debiasing methods have attempted to disentangle texture-biased representations for downstream tasks, but accurately discarding biased features without altering other relevant information remains challenging. In this paper, we propose a novel framework that leverages image translation to generate additional training images using the content of a source image and the texture of a target image with a different bias property, explicitly mitigating texture bias when training a model on a target task. Our model ensures texture similarity between the target and generated images via a texture co-occurrence loss, while preserving content details from source images with a spatial self-similarity loss. Both the generated and original training images are combined to train classification or segmentation models that are robust to inconsistent texture bias. Evaluation on five classification and two segmentation datasets with known texture biases demonstrates the utility of our method, with significant improvements over recent state-of-the-art methods in all cases.
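The following is a minimal, illustrative PyTorch sketch of the two losses named above. The function names and exact formulations here are assumptions for exposition, not the paper's implementation; in particular, a texture co-occurrence objective is often enforced adversarially with a patch discriminator, whereas this sketch substitutes a simple Gram-statistic match over patches.

import torch
import torch.nn.functional as F


def spatial_self_similarity_loss(feat_src: torch.Tensor,
                                 feat_gen: torch.Tensor) -> torch.Tensor:
    # feat_src, feat_gen: (B, C, H, W) feature maps, e.g. from an encoder.
    # For each image, build an (HW, HW) cosine-similarity matrix between all
    # spatial positions; content structure is preserved when the source and
    # generated similarity matrices agree, regardless of texture.
    def self_sim(f):
        f = f.flatten(2).transpose(1, 2)        # (B, HW, C)
        f = F.normalize(f, dim=-1)
        return f @ f.transpose(1, 2)            # (B, HW, HW)

    return F.l1_loss(self_sim(feat_src), self_sim(feat_gen))


def texture_cooccurrence_loss(patches_gen: torch.Tensor,
                              patches_tgt: torch.Tensor) -> torch.Tensor:
    # patches_gen, patches_tgt: (N, C, h, w) random crops from the generated
    # and target images. Channel co-occurrence (Gram) statistics capture
    # texture while discarding spatial layout, so matching them pushes the
    # generated image toward the target's texture without copying content.
    def gram(p):
        n, c, h, w = p.shape
        f = p.flatten(2)                        # (N, C, h*w)
        return (f @ f.transpose(1, 2)) / (c * h * w)

    return F.l1_loss(gram(patches_gen), gram(patches_tgt))

In a hypothetical training loop, the total translation objective would add these two terms (suitably weighted) to the usual adversarial loss, and the resulting translated images would be pooled with the originals to train the downstream classifier or segmenter.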

Keywords: Debiasing; Self-similarity; Texture co-occurrence; Unpaired image translation; Unsupervised domain adaptation.