GAC3D: improving monocular 3D object detection with ground-guide model and adaptive convolution

PeerJ Comput Sci. 2021 Oct 6:7:e686. doi: 10.7717/peerj-cs.686. eCollection 2021.

Abstract

Monocular 3D object detection has recently become prevalent in autonomous driving and navigation applications due to its cost-efficiency and ease of integration into existing vehicles. The most challenging task in monocular vision is estimating a reliable object location, because RGB images lack depth information. Many methods tackle this ill-posed problem by directly regressing the object's depth or by taking a depth map as a supplementary input to enhance the model's results. However, performance relies heavily on the quality of the estimated depth map, which is biased toward the training data. In this work, we first propose a depth-adaptive convolution to replace the traditional 2D convolution and handle the divergent contexts of the image's features, leading to a significant improvement in both training convergence and testing accuracy. Second, we propose a ground plane model that applies geometric constraints during pose estimation. With the new method, named GAC3D, we achieve better detection results. We demonstrate our approach on the KITTI 3D Object Detection benchmark, where it outperforms existing monocular methods.
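To make the first idea concrete, the sketch below shows one common way a convolution can be made "depth-adaptive": the base kernel is reweighted per pixel by a Gaussian affinity between the centre depth and each neighbour's depth, so that context from pixels at very different depths is suppressed. This is an illustrative simplification under assumed shapes and a hypothetical `sigma` parameter, not the paper's exact formulation.

```python
import numpy as np

def depth_adaptive_conv(feat, depth, weight, sigma=1.0):
    """Naive depth-adaptive convolution (illustrative sketch).

    feat   : (C_in, H, W) feature map
    depth  : (H, W) per-pixel depth estimate
    weight : (C_out, C_in, kh, kw) base kernel
    At every pixel the kernel is modulated by exp(-(d_n - d_c)^2 / 2*sigma^2),
    the depth affinity between the centre and each neighbour, so features
    from depth-inconsistent context contribute less.
    """
    C_out, C_in, kh, kw = weight.shape
    _, H, W = feat.shape
    pad = kh // 2
    fpad = np.pad(feat, ((0, 0), (pad, pad), (pad, pad)))
    dpad = np.pad(depth, pad, mode="edge")
    out = np.zeros((C_out, H, W))
    for y in range(H):
        for x in range(W):
            patch_d = dpad[y:y + kh, x:x + kw]
            # Gaussian depth affinity w.r.t. the centre pixel
            aff = np.exp(-((patch_d - depth[y, x]) ** 2) / (2 * sigma ** 2))
            patch_f = fpad[:, y:y + kh, x:x + kw]      # (C_in, kh, kw)
            mod_w = weight * aff[None, None]           # modulated kernel
            out[:, y, x] = np.einsum("oikl,ikl->o", mod_w, patch_f)
    return out
```

With a constant depth map the affinity is 1 everywhere, and the operation reduces to a plain 2D convolution, which is a useful sanity check.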
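The ground plane model exploits the kind of geometric constraint sketched below: under a flat-ground assumption with a camera mounted at a known height, the image row of an object's ground-contact point determines its depth in closed form. The camera parameters here are illustrative (KITTI-like values); the paper's ground-guide model is more elaborate than this classical relation.

```python
import numpy as np

def depth_from_ground_contact(v_contact, f_y, c_y, cam_height):
    """Depth of a ground point from its image row (flat-ground sketch).

    Assumes the camera's optical axis is parallel to a flat ground plane
    at height `cam_height` (metres). A ground point projecting to image
    row v satisfies (v - c_y) / f_y = cam_height / z, hence
        z = f_y * cam_height / (v - c_y).
    f_y is the vertical focal length (pixels), c_y the principal-point row.
    """
    v = np.asarray(v_contact, dtype=float)
    return f_y * cam_height / (v - c_y)
```

For example, with f_y = 700, c_y = 180, and a camera 1.65 m above the ground, an object whose base sits at row 250 lies at z = 700 * 1.65 / 70 = 16.5 m.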

Keywords: 3D object detection; Adaptive convolution; Depth estimation; Ground-guide; Monocular; Pseudo-pose.

Associated data

  • figshare/10.6084/m9.figshare.15000432.v1

Grants and funding

This research is supported by Ho Chi Minh City University of Technology (HCMUT), VNU-HCM under grant number To-KHMT-2020-03. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.