SIR: Self-supervised Image Rectification via Seeing the Same Scene from Multiple Different Lenses

IEEE Trans Image Process. 2023 Jan 9:PP. doi: 10.1109/TIP.2022.3231087. Online ahead of print.

Abstract

Deep learning has demonstrated its power in image rectification by leveraging the representation capacity of deep neural networks via supervised training on a large-scale synthetic dataset. However, such a model may overfit the synthetic images and generalize poorly to real-world fisheye images, owing to the limited universality of any specific distortion model and the lack of explicit modeling of the distortion and rectification process. In this paper, we propose a novel self-supervised image rectification (SIR) method based on an important insight: the rectified results of distorted images of the same scene taken through different lenses should be identical. Specifically, we devise a new network architecture with a shared encoder and several prediction heads, each of which predicts the distortion parameter of a specific distortion model. We further leverage a differentiable warping module to generate rectified and re-distorted images from the predicted distortion parameters, and exploit the intra- and inter-model consistency between them during training, leading to a self-supervised learning scheme that requires neither ground-truth distortion parameters nor undistorted images. Experiments on a synthetic dataset and real-world fisheye images demonstrate that our method achieves comparable or even better performance than the supervised baseline and representative state-of-the-art (SOTA) methods. The proposed self-supervised method also offers a way to improve the universality of distortion models while preserving their self-consistency. Code and datasets will be available at https://github.com/loong8888/SIR.
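To make the intra-model consistency idea concrete, the following is a minimal sketch, not the paper's implementation: it assumes the common one-parameter division model (r_u = r_d / (1 + λ·r_d²)) as the "specific distortion model" and checks that rectifying a distorted radius and then re-distorting it recovers the input, which is the round-trip consistency a self-supervised loss can penalize. The function names and the parameter value λ are illustrative choices, not from the paper.

```python
import numpy as np

def rectify(r_d, lam):
    """Map distorted radii to undistorted radii under the
    one-parameter division model: r_u = r_d / (1 + lam * r_d^2)."""
    return r_d / (1.0 + lam * r_d**2)

def distort(r_u, lam):
    """Invert the division model by solving
    lam * r_u * r_d^2 - r_d + r_u = 0 for r_d (the root that
    reduces to r_d = r_u as lam -> 0)."""
    return (1.0 - np.sqrt(1.0 - 4.0 * lam * r_u**2)) / (2.0 * lam * r_u)

# Illustrative barrel distortion (negative lam) over a set of radii.
lam = -0.1
r_d = np.linspace(0.1, 1.0, 10)
r_u = rectify(r_d, lam)
r_d_back = distort(r_u, lam)

# Intra-model consistency: rectify -> re-distort is the identity.
print(np.allclose(r_d, r_d_back))  # → True
```

In the paper's training scheme the analogous round trip is performed by the differentiable warping module on full images, and inter-model consistency additionally requires that the rectified outputs of the different prediction heads agree with one another.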