Self-Supervised Monocular Depth Estimation With Multiscale Perception

IEEE Trans Image Process. 2022;31:3251-3266. doi: 10.1109/TIP.2022.3167307. Epub 2022 Apr 26.

Abstract

Extracting 3D information from a single optical image is very attractive. Recently emerging self-supervised methods can learn depth representations without ground-truth depth maps as training data by transforming the depth prediction task into an image synthesis task. However, existing methods rely on a differentiable bilinear sampler for image synthesis, so each pixel in a synthetic image is derived from only four pixels in the source image, and each pixel in the depth map therefore perceives only a few pixels in the source image. In addition, when calculating the photometric error between a synthetic image and its corresponding target image, existing methods consider only the photometric error within a small neighborhood of each pixel and thus ignore correlations over larger areas, which makes the model prone to falling into local optima for small patches. To extend the perceptual area of the depth map over the source image, we propose a novel multi-scale method that downsamples the predicted depth map and performs image synthesis at different resolutions, enabling each pixel in the depth map to perceive more pixels in the source image and improving the performance of the model. To address the locality of the photometric error, we propose a structural similarity (SSIM) pyramid loss that allows the model to sense differences between images over regions of multiple sizes. Experimental results show that our method achieves superior performance on both outdoor and indoor benchmarks.
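
For illustration only, the following PyTorch-style sketch shows the kind of multi-scale image synthesis the abstract describes: the predicted depth map and the images are downsampled, and a bilinear warp supervises the depth at each resolution. This is not the paper's implementation; the `project` callback, the L1 photometric term, and the number of scales are assumptions introduced here.

```python
import torch.nn.functional as F

def warp(src, pix_coords):
    """Bilinearly sample the source image at the projected coordinates.

    grid_sample interpolates every output pixel from only the four
    nearest source pixels, which is the locality the multi-scale
    scheme is meant to widen."""
    return F.grid_sample(src, pix_coords, mode="bilinear",
                         padding_mode="border", align_corners=False)

def multiscale_synthesis_loss(depth, src, tgt, project, num_scales=4):
    """Downsample the predicted depth and synthesize the target view at
    several resolutions, so each depth pixel receives photometric
    supervision from a wider area of the source image.

    `project` is a hypothetical callback (not from the paper): it turns
    a depth map into normalized sampling coordinates in [-1, 1] for
    grid_sample, using intrinsics and relative pose rescaled to the
    current resolution."""
    loss = 0.0
    for s in range(num_scales):
        if s > 0:
            depth = F.avg_pool2d(depth, 2)
            src = F.avg_pool2d(src, 2)
            tgt = F.avg_pool2d(tgt, 2)
        coords = project(depth)                    # (B, H, W, 2)
        synth = warp(src, coords)
        loss = loss + (synth - tgt).abs().mean()   # simple L1 photometric term
    return loss / num_scales
```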
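
Similarly, a minimal sketch of an SSIM pyramid loss, assuming the average-pooling SSIM simplification common in self-supervised depth losses; the window size and number of pyramid levels are illustrative, not the paper's settings.

```python
import torch.nn.functional as F

def ssim_map(x, y, window=3):
    """Per-pixel SSIM dissimilarity, with local statistics computed by
    average pooling (a common simplification of the full SSIM index)."""
    C1, C2 = 0.01 ** 2, 0.03 ** 2
    pad = window // 2
    mu_x = F.avg_pool2d(x, window, 1, pad)
    mu_y = F.avg_pool2d(y, window, 1, pad)
    sigma_x = F.avg_pool2d(x * x, window, 1, pad) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, window, 1, pad) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, window, 1, pad) - mu_x * mu_y
    num = (2 * mu_x * mu_y + C1) * (2 * sigma_xy + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (sigma_x + sigma_y + C2)
    return ((1 - num / den) / 2).clamp(0, 1)

def ssim_pyramid_loss(pred, target, levels=4):
    """Average the SSIM dissimilarity over progressively downsampled
    copies of both images, so image structure is compared over regions
    of several effective sizes rather than one fixed window."""
    loss = 0.0
    for _ in range(levels):
        loss = loss + ssim_map(pred, target).mean()
        pred = F.avg_pool2d(pred, 2)
        target = F.avg_pool2d(target, 2)
    return loss / levels
```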