Domain Adaptation for Underwater Image Enhancement

IEEE Trans Image Process. 2023 Feb 17. doi: 10.1109/TIP.2023.3244647. Online ahead of print.

Abstract

Recently, learning-based algorithms have shown impressive performance in underwater image enhancement. Most of them are trained on synthetic data and achieve outstanding performance on it. However, these deep methods ignore the significant domain gap between synthetic and real data (i.e., the inter-domain gap), and thus models trained on synthetic data often fail to generalize well to real-world underwater scenarios. Moreover, the complex and changeable underwater environment also causes a large distribution gap within the real data itself (i.e., the intra-domain gap). Almost no research focuses on this problem, and as a result existing techniques often produce visually unpleasing artifacts and color distortions on various real images. Motivated by these observations, we propose a novel Two-phase Underwater Domain Adaptation network (TUDA) to simultaneously minimize the inter-domain and intra-domain gaps. Concretely, in the first phase, a new triple-alignment network is designed, consisting of a translation part that enhances the realism of input images, followed by a task-oriented enhancement part. By performing image-level, feature-level, and output-level adaptation in these two parts through joint adversarial learning, the network can better build invariance across domains and thus bridge the inter-domain gap. In the second phase, the real data are classified into easy and hard samples according to the assessed quality of their enhanced images, where a new rank-based underwater image quality assessment method is embedded. By leveraging the implicit quality information learned from rankings, this method can more accurately assess the perceptual quality of enhanced images. Using pseudo labels from the easy samples, an easy-hard adaptation technique is then performed to effectively reduce the intra-domain gap between easy and hard samples. Extensive experimental results demonstrate that the proposed TUDA is significantly superior to existing works in terms of both visual quality and quantitative metrics.
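The second-phase easy-hard classification can be illustrated with a minimal sketch. The abstract gives no implementation details, so everything below is an assumption made for illustration only: the function name easy_hard_split, the score range, and the split ratio are hypothetical, and the quality scores are stand-ins for the output of a rank-based underwater quality assessment model. The sketch only shows the idea of partitioning real samples by assessed quality, after which the enhanced outputs of the easy subset would serve as pseudo labels for adapting to the hard subset.

    import numpy as np

    def easy_hard_split(quality_scores, easy_ratio=0.5):
        # Rank real samples by the quality predicted for their enhanced images
        # (higher score = better perceptual quality) and keep the top fraction
        # as the "easy" subset; the remainder forms the "hard" subset.
        # easy_ratio is an assumed hyperparameter, not given in the abstract.
        order = np.argsort(np.asarray(quality_scores))[::-1]
        n_easy = int(len(order) * easy_ratio)
        return order[:n_easy], order[n_easy:]

    # Toy usage with made-up scores for eight enhanced real images.
    scores = [0.82, 0.31, 0.65, 0.47, 0.90, 0.22, 0.58, 0.73]
    easy_idx, hard_idx = easy_hard_split(scores, easy_ratio=0.5)
    print("easy samples:", easy_idx)   # enhanced outputs of these would act as pseudo labels
    print("hard samples:", hard_idx)   # targets of the easy-hard adaptation step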