Rethinking 3-D LiDAR Point Cloud Segmentation

IEEE Trans Neural Netw Learn Syst. 2021 Dec 16:PP. doi: 10.1109/TNNLS.2021.3132836. Online ahead of print.

Abstract

Many point-based semantic segmentation methods have been designed for indoor scenarios, but they struggle when applied to point clouds captured by a light detection and ranging (LiDAR) sensor in an outdoor environment. To make these methods efficient and robust enough to handle LiDAR data, we introduce the general concept of reformulating 3-D point-based operations so that they can operate in the projection space. Using three point-based methods, we show that the reformulated versions are between 300 and 400 times faster while achieving higher accuracy. Furthermore, we demonstrate that reformulating 3-D point-based operations makes it possible to design new architectures that unify the benefits of point-based and image-based methods. As an example, we introduce a network that integrates reformulated 3-D point-based operations into a 2-D encoder-decoder architecture that fuses information from different 2-D scales. We evaluate the approach on four challenging datasets for semantic LiDAR point cloud segmentation and show that combining reformulated 3-D point-based operations with 2-D image-based operations achieves very good results on all four datasets.
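The projection space referred to above is commonly obtained by spherically projecting each LiDAR return onto a 2-D range image, so that image-based operations (and reformulated point-based ones) can run on a dense grid. A minimal sketch of this standard projection is given below; the resolution and field-of-view values are illustrative (roughly a 64-beam rotating LiDAR) and are assumptions, not parameters taken from the paper.

```python
import numpy as np

def spherical_projection(points, H=64, W=2048, fov_up=3.0, fov_down=-25.0):
    """Project LiDAR points of shape (N, 3) onto an (H, W) range image.

    Sketch of the standard spherical projection used by projection-based
    LiDAR segmentation methods. fov_up/fov_down are in degrees and are
    illustrative defaults, not values from the paper.
    """
    fov_up = np.radians(fov_up)
    fov_down = np.radians(fov_down)
    fov = fov_up - fov_down

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)

    yaw = np.arctan2(y, x)                        # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / depth, -1, 1))  # elevation angle

    u = 0.5 * (1.0 - yaw / np.pi) * W             # column coordinate
    v = (1.0 - (pitch - fov_down) / fov) * H      # row coordinate

    u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)

    image = np.full((H, W), -1.0, dtype=np.float32)
    # Write farther points first so nearer points overwrite them
    # when several points fall into the same pixel.
    order = np.argsort(depth)[::-1]
    image[v[order], u[order]] = depth[order]
    return image
```

Pixels that receive no point keep the sentinel value -1; per-point features such as intensity or the raw (x, y, z) coordinates can be scattered into additional channels in the same way.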