3D Object Detection for Self-Driving Cars Using Video and LiDAR: An Ablation Study

Sensors (Basel). 2023 Mar 17;23(6):3223. doi: 10.3390/s23063223.

Abstract

Methods based on 64-beam LiDAR can provide very precise 3D object detection. However, highly accurate LiDAR sensors are extremely costly: a 64-beam model can cost approximately USD 75,000. We previously proposed SLS-Fusion (sparse LiDAR and stereo fusion), which fuses a low-cost four-beam LiDAR with stereo cameras and outperforms most advanced stereo-LiDAR fusion methods. In this paper, we analyze how the stereo and LiDAR sensors contribute to the performance of the SLS-Fusion model for 3D object detection as a function of the number of LiDAR beams used. Data coming from the stereo camera play a significant role in the fusion model; however, this contribution needs to be quantified, and its variation with the number of LiDAR beams used inside the model needs to be identified. Thus, to evaluate the roles of the parts of the SLS-Fusion network that represent the LiDAR and stereo camera architectures, we propose dividing the model into two independent decoder networks. The results of this study show that, starting from four beams, increasing the number of LiDAR beams has no significant impact on the SLS-Fusion performance. The presented results can guide practitioners' design decisions.
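To illustrate the kind of branch-wise ablation the abstract describes, the following is a minimal sketch, not the authors' SLS-Fusion code: a fusion network split into independent stereo and sparse-LiDAR encoder-decoder branches, with flags to disable either branch so each sensor's contribution can be measured in isolation. All module names, channel counts, and the late-fusion-by-averaging choice are illustrative assumptions.

```python
# Hypothetical sketch (not the released SLS-Fusion implementation): a fusion model
# split into two independent decoder branches for sensor-wise ablation.
import torch
import torch.nn as nn


class BranchEncoderDecoder(nn.Module):
    """Small encoder-decoder for one sensor branch (stereo pair or sparse LiDAR depth)."""

    def __init__(self, in_channels: int, feat_channels: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat_channels, feat_channels, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat_channels, 1, 4, stride=2, padding=1),  # dense depth map
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


class AblationFusionNet(nn.Module):
    """Fusion model whose stereo and LiDAR branches can be enabled independently."""

    def __init__(self, use_stereo: bool = True, use_lidar: bool = True):
        super().__init__()
        assert use_stereo or use_lidar, "at least one branch must be enabled"
        self.use_stereo, self.use_lidar = use_stereo, use_lidar
        # Stereo branch consumes a concatenated left/right RGB pair (6 channels);
        # LiDAR branch consumes a sparse depth map projected from an N-beam scan (1 channel).
        self.stereo_branch = BranchEncoderDecoder(in_channels=6) if use_stereo else None
        self.lidar_branch = BranchEncoderDecoder(in_channels=1) if use_lidar else None

    def forward(self, stereo_pair: torch.Tensor, sparse_depth: torch.Tensor) -> torch.Tensor:
        outputs = []
        if self.use_stereo:
            outputs.append(self.stereo_branch(stereo_pair))
        if self.use_lidar:
            outputs.append(self.lidar_branch(sparse_depth))
        # Late fusion by averaging; disabling one branch isolates the other's contribution.
        return torch.stack(outputs, dim=0).mean(dim=0)


if __name__ == "__main__":
    left_right = torch.randn(1, 6, 64, 128)   # stereo image pair
    beams = torch.randn(1, 1, 64, 128)        # depth map projected from a sparse (e.g., 4-beam) scan
    for stereo, lidar in [(True, True), (True, False), (False, True)]:
        net = AblationFusionNet(use_stereo=stereo, use_lidar=lidar)
        out = net(left_right, beams)
        print(f"stereo={stereo} lidar={lidar} -> output {tuple(out.shape)}")
```

Running each configuration (fusion, stereo-only, LiDAR-only) on the same validation split is one way to quantify, per beam count, how much each sensor contributes to the final detection accuracy.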

Keywords: 3D object detection; LiDAR; autonomous vehicle; fusion; stereo camera.

Grants and funding

This research received no external funding.