Depth-Aware Unpaired Video Dehazing

IEEE Trans Image Process. 2024;33:2388-2403. doi: 10.1109/TIP.2024.3378472. Epub 2024 Mar 28.

Abstract

This paper investigates a novel unpaired video dehazing framework, which is a practical alternative to paired training since it relieves the burden of collecting paired data. In this paradigm, two key issues must be addressed to achieve satisfactory performance: 1) temporal consistency, which does not arise in single-image dehazing, and 2) stronger dehazing ability. To handle these problems, we introduce depth information to construct additional regularization and supervision. Specifically, we synthesize realistic motions from depth information to improve the effectiveness and applicability of traditional temporal losses, thereby better regularizing spatiotemporal consistency. Moreover, depth information is also exploited in adversarial learning: for haze removal, it guides the local discriminator to focus on regions where haze residuals are more likely to remain. The more pertinent guidance from this depth-aware local discriminator consequently improves dehazing performance. Extensive experiments validate the effectiveness of our method and its superiority over competing approaches. To the best of our knowledge, this study is the first to address unpaired video dehazing. Our code is available at https://github.com/YaN9-Y/DUVD.
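As a rough, illustrative sketch of the depth-guided local discrimination idea (this is not the authors' released implementation; the function name, patch size, and sampling scheme below are assumptions), one could bias patch sampling toward distant regions, where the atmospheric scattering model predicts denser haze and hence likelier residuals:

```python
import torch
import torch.nn.functional as F

def sample_depth_weighted_patches(frame, depth, num_patches=8, patch_size=64):
    """Sample square patches from `frame`, preferring regions with large depth.

    Under the atmospheric scattering model, haze density grows with scene depth,
    so residual haze after dehazing is more likely in distant regions.
    frame: (C, H, W) tensor; depth: (H, W) tensor of relative depth (>= 0).
    """
    _, H, W = frame.shape
    stride = patch_size // 2
    # Average depth over each candidate patch location (overlapping grid).
    pooled = F.avg_pool2d(depth[None, None], patch_size, stride=stride)[0, 0]
    # Turn pooled depth into a sampling distribution over candidate locations.
    probs = pooled.flatten() + 1e-6
    probs = probs / probs.sum()
    idx = torch.multinomial(probs, num_patches, replacement=True)
    rows = torch.div(idx, pooled.shape[1], rounding_mode="floor")
    cols = idx % pooled.shape[1]
    patches = []
    for r, c in zip(rows.tolist(), cols.tolist()):
        top, left = r * stride, c * stride
        patches.append(frame[:, top:top + patch_size, left:left + patch_size])
    return torch.stack(patches)  # (num_patches, C, patch_size, patch_size)

# Hypothetical usage: patches from the dehazed output feed a local (patch) discriminator.
# dehazed: (3, 256, 256); est_depth: (256, 256) from any monocular depth estimator.
dehazed, est_depth = torch.rand(3, 256, 256), torch.rand(256, 256)
fake_patches = sample_depth_weighted_patches(dehazed, est_depth)
```

In such a scheme, the sampled patches would be scored by an ordinary patch discriminator; the depth weighting simply concentrates the adversarial signal on regions where residual haze is expected.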