Robust Depth Estimation using Auto-Exposure Bracketing

IEEE Trans Image Process. 2018 Dec 14. doi: 10.1109/TIP.2018.2886777. Online ahead of print.

Abstract

As the computing power of hand-held devices grows, there has been increasing interest in capturing depth information to enable a variety of photographic applications. However, under low-light conditions, most devices still suffer from low imaging quality and inaccurate depth acquisition. To address this problem, we present a robust depth estimation method for a short burst shot with varying intensity (i.e., auto-exposure bracketing) and/or strong noise (i.e., high ISO). Our key idea is to synergistically combine deep convolutional neural networks with a geometric understanding of the scene. We introduce a geometric transformation between optical flow and depth tailored for burst images, enabling our learning-based multi-view stereo matching to be performed effectively. We then describe our depth estimation pipeline, which incorporates this geometric transformation into our residual-flow network. This allows our framework to produce an accurate depth map even from a bracketed image sequence. We demonstrate that our method outperforms state-of-the-art methods on various datasets captured with a smartphone and a DSLR camera. Moreover, we show that the estimated depth can be used for image quality enhancement and photographic editing.
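To make the abstract's "geometric transformation between optical flow and depth" concrete, the sketch below shows the standard relation for a calibrated rigid camera motion: a reference pixel x with depth Z reprojects into a source frame as x' ~ K (R Z K^{-1} x + t), so an observed flow vector constrains Z per pixel. This is a generic illustration under assumed known intrinsics K and relative pose (R, t), not necessarily the paper's exact formulation; the function name depth_from_flow and its arguments are hypothetical.

import numpy as np

def depth_from_flow(flow, K, R, t):
    """Recover per-pixel depth from optical flow and a known relative camera pose.

    flow : (H, W, 2) optical flow from the reference frame to a source frame
    K    : (3, 3) camera intrinsics
    R, t : relative rotation (3, 3) and translation (3,) of the source camera

    Per pixel, solves the rigid-motion reprojection  x' ~ K (R * Z * K^{-1} x + t)
    for the depth Z that best explains the observed flow (least squares over the
    two image coordinates). Illustrative sketch only, not the paper's network.
    """
    H, W = flow.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    ones = np.ones_like(xs, dtype=np.float64)
    pix = np.stack([xs, ys, ones], axis=-1).reshape(-1, 3).astype(np.float64)
    rot = pix @ np.linalg.inv(K).T @ R.T @ K.T   # K R K^{-1} x, to be scaled by Z
    trans = (K @ t).reshape(3)                   # K t
    tgt = pix[:, :2] + flow.reshape(-1, 2)       # observed source-frame pixel positions

    # From  lambda * [u', v', 1]^T = Z * rot + trans :
    #   Z * (rot_x - u' * rot_z) = u' * trans_z - trans_x   (and likewise for v')
    a = np.stack([rot[:, 0] - tgt[:, 0] * rot[:, 2],
                  rot[:, 1] - tgt[:, 1] * rot[:, 2]], axis=-1)
    b = np.stack([tgt[:, 0] * trans[2] - trans[0],
                  tgt[:, 1] * trans[2] - trans[1]], axis=-1)
    Z = (a * b).sum(-1) / np.maximum((a * a).sum(-1), 1e-12)  # per-pixel least squares
    return Z.reshape(H, W)

In a burst setting such as the one described above, the camera baseline between frames is small, so the depth signal in the flow is weak and noisy; this is presumably why the paper couples the geometric relation with a learned residual-flow network rather than applying the closed-form inversion alone.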