Neutral Cross-Entropy Loss Based Unsupervised Domain Adaptation for Semantic Segmentation

IEEE Trans Image Process. 2021;30:4516-4525. doi: 10.1109/TIP.2021.3073285. Epub 2021 Apr 27.

Abstract

Generalization in semantic segmentation remains a major challenge when the data distributions of the source and target domains mismatch. Unsupervised domain adaptation (UDA) approaches have been proposed to mitigate this problem, among which entropy-minimization-based methods have gained increasing attention. However, these methods merely follow the cluster assumption by sharpening the prediction distribution, and thus yield limited performance improvement. Without additional priors, the entropy loss can easily over-sharpen the prediction distribution, which introduces noisy information into the learning process. Moreover, the gradient of the entropy loss is strongly biased toward easy samples, further limiting generalization gains. In this paper, we first propose a pixel-level consistency regularization method, which introduces the smoothness prior into the UDA problem. Building on this consistency regularization, we then propose the neutral cross-entropy loss and reveal that its internal neutralization mechanism mitigates the over-sharpening of entropy minimization via the flattening effect of consistency regularization. We also demonstrate that the neutral cross-entropy loss inherently tackles the gradient bias toward easy samples. Experiments show that the proposed method outperforms state-of-the-art methods in two synthetic-to-real benchmarks, using only a lightweight network.
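The two ingredients the abstract contrasts, entropy minimization (which sharpens per-pixel predictions) and pixel-level consistency regularization (which encourages agreement between predictions for two views of the same image), can be sketched in plain NumPy. This is an illustrative sketch of the general techniques, not the authors' actual loss or code; the perturbation, the MSE consistency measure, and all array shapes are assumptions for the toy example.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def entropy_loss(probs, eps=1e-8):
    """Mean per-pixel Shannon entropy; minimizing it sharpens predictions,
    which is the over-sharpening risk the paper points out."""
    return float(-(probs * np.log(probs + eps)).sum(axis=-1).mean())

def consistency_loss(probs_a, probs_b):
    """Pixel-level consistency: MSE between predicted distributions for
    two views of the same image (smoothness prior)."""
    return float(((probs_a - probs_b) ** 2).mean())

# Toy example: logits for a 2x2 "image" with 3 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 2, 3))
probs = softmax(logits)
# Second view: the same logits under a small assumed perturbation,
# standing in for a data augmentation of the input image.
probs_aug = softmax(logits + 0.1 * rng.normal(size=logits.shape))

H = entropy_loss(probs)          # entropy-minimization term
C = consistency_loss(probs, probs_aug)  # consistency term
```

A UDA training objective of this family would minimize a weighted sum of a supervised source-domain loss plus terms like `H` and `C` on target-domain images; the neutral cross-entropy loss proposed in the paper combines the two effects in a single loss rather than as separate weighted terms.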