Self-Supervised Arbitrary-Scale Implicit Point Clouds Upsampling

IEEE Trans Pattern Anal Mach Intell. 2023 Oct;45(10):12394-12407. doi: 10.1109/TPAMI.2023.3287628. Epub 2023 Sep 5.

Abstract

Point cloud upsampling (PCU), which aims to generate dense and uniform point clouds from the sparse input captured by 3D sensors such as LiDAR, is a practical yet challenging task. It has potential applications in many real-world scenarios, such as autonomous driving, robotics, and AR/VR. Deep neural network based methods have achieved remarkable success in PCU. However, most existing deep PCU methods either adopt end-to-end supervised training, which requires large numbers of paired sparse inputs and dense ground truths as supervision, or treat upscaling by different factors as independent tasks, requiring a separate network for each scaling factor and thus significantly increasing model complexity and training time. In this article, we propose a novel method that achieves self-supervised and magnification-flexible PCU simultaneously. Rather than explicitly learning the mapping between sparse and dense point clouds, we formulate PCU as the task of seeking, for each seed point, its nearest projection point on the implicit surface. We then define two implicit neural functions that estimate the projection direction and distance, respectively, and that can be trained through pretext learning tasks. Moreover, a projection rectification strategy is tailored to remove outliers so as to keep the shape of the object clear and sharp. Experimental results demonstrate that our self-supervised scheme achieves competitive or even better performance than state-of-the-art supervised methods.
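The core idea described above can be sketched as follows: two implicit neural functions take a seed point sampled near the surface and predict (a) a projection direction and (b) a projection distance, so the upsampled point is seed + distance * direction. This is a minimal illustrative sketch, not the paper's implementation: the tiny random-weight MLPs stand in for trained networks, and all names (`direction_net`, `distance_net`, `project_seeds`) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(in_dim, hidden, out_dim):
    """Return a 2-layer MLP with random (untrained) placeholder weights."""
    w1 = rng.standard_normal((in_dim, hidden)) * 0.1
    w2 = rng.standard_normal((hidden, out_dim)) * 0.1
    return lambda x: np.maximum(x @ w1, 0.0) @ w2  # ReLU hidden layer

# The two implicit functions from the abstract (placeholders here):
direction_net = make_mlp(3, 64, 3)  # estimates projection direction
distance_net = make_mlp(3, 64, 1)   # estimates projection distance

def project_seeds(seeds):
    """Move each seed to its predicted projection point on the surface."""
    d = direction_net(seeds)
    d /= np.linalg.norm(d, axis=-1, keepdims=True) + 1e-8  # unit directions
    t = distance_net(seeds)
    return seeds + t * d  # nearest projection points

# Arbitrary-scale upsampling: jitter the sparse cloud to obtain R*N seeds,
# then project them; R is not baked into the network, so any ratio works.
sparse = rng.random((256, 3))  # sparse input point cloud (N=256)
ratio = 4
seeds = np.repeat(sparse, ratio, axis=0) \
        + 0.01 * rng.standard_normal((ratio * sparse.shape[0], 3))
dense = project_seeds(seeds)   # upsampled cloud of shape (1024, 3)
```

Because the scaling factor only determines how many seeds are drawn, a single trained pair of networks serves every magnification, which is what makes the method magnification-flexible.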