Reconstruct as Far as You Can: Consensus of Non-Rigid Reconstruction from Feasible Regions

IEEE Trans Pattern Anal Mach Intell. 2021 Feb;43(2):623-637. doi: 10.1109/TPAMI.2019.2931317. Epub 2021 Jan 8.

Abstract

Much progress has been made in non-rigid structure from motion (NRSfM) over the last two decades, making it possible to provide reasonable solutions for synthetically created benchmark data. To apply these NRSfM techniques in more realistic situations, however, two important problems must be solved: First, general scenes contain complex deformations as well as multiple objects, which violates the usual assumptions of previous NRSfM proposals. Second, there are many unreconstructable regions in a video, either because the tracked 2D trajectories are discontinued or because the regions remain static with respect to the camera, and these require careful handling. In this paper, we show that a consensus-based reconstruction framework can handle these issues effectively. Even though the entire scene is complex, its parts usually undergo simpler deformations, and even though some parts are unreconstructable, they can be weeded out to reduce their harmful effect on the overall reconstruction. The main difficulty of this approach lies in identifying appropriate parts; however, this difficulty can be avoided effectively by sampling parts stochastically and then aggregating their reconstructions afterwards. Experimental results show that the proposed method advances the state of the art on popular benchmark data under much harsher conditions, i.e., narrow camera view ranges, and that it can reconstruct video-based real-world data effectively, covering as many areas as it can without elaborate user input.
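The sample-then-aggregate strategy described in the abstract can be illustrated with a minimal conceptual sketch. This is not the authors' implementation: the part reconstructor below is a stand-in oracle (in the paper it would be an NRSfM solver applied to a subset of trajectories), and the function names, part sizes, and residual test are all illustrative assumptions. The sketch shows the general pattern: sample many overlapping parts of the tracked points, reconstruct each part independently, weed out parts whose residual is too large, and average the surviving per-point estimates.

```python
# Conceptual sketch of consensus over stochastically sampled parts.
# NOT the paper's implementation; reconstruct_part is a stand-in for
# a per-part NRSfM solver, and all parameter names are illustrative.
import random

def consensus_reconstruct(n_points, reconstruct_part, n_parts=200, part_size=8,
                          residual_fn=None, residual_thresh=1.0):
    """Aggregate per-point estimates over stochastically sampled parts.

    reconstruct_part(part) -> {point_id: estimate} reconstructs one part.
    residual_fn(part, est) -> float scores a part; high values are rejected.
    Points never covered by a surviving part stay unreconstructed (None),
    mirroring the "reconstruct as far as you can" behaviour.
    """
    sums = [0.0] * n_points
    counts = [0] * n_points
    for _ in range(n_parts):
        part = random.sample(range(n_points), part_size)  # stochastic part
        est = reconstruct_part(part)
        # Weed out unreliable parts before they pollute the consensus.
        if residual_fn is not None and residual_fn(part, est) > residual_thresh:
            continue
        for p, v in est.items():
            sums[p] += v
            counts[p] += 1
    return [s / c if c > 0 else None for s, c in zip(sums, counts)]
```

A toy usage example: with a noisy per-part oracle, averaging over many sampled parts drives the per-point error well below the per-part noise, which is the essential benefit of the consensus step.

```python
random.seed(0)
truth = [float(i) for i in range(20)]

def noisy_oracle(part):
    # Stand-in for a part-wise reconstruction with per-part noise.
    return {p: truth[p] + random.gauss(0.0, 0.05) for p in part}

depths = consensus_reconstruct(20, noisy_oracle, n_parts=300, part_size=5)
```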