Vehicle Localization Using 3D Building Models and Point Cloud Matching

Sensors (Basel). 2021 Aug 9;21(16):5356. doi: 10.3390/s21165356.

Abstract

Detecting buildings in the surroundings of an urban vehicle and matching them to building models available on map services is an emerging trend in robotic localization for urban vehicles. In this paper, we present a novel technique that improves on a previous work by detecting building façades and their positions and finding the correspondences with their 3D models, available in OpenStreetMap. The proposed technique uses segmented point clouds produced from stereo images, processed by a convolutional neural network. The point clouds of the façades are then matched against a reference point cloud, produced by extruding the buildings' outlines, which are available on OpenStreetMap (OSM). To produce a lane-level localization of the vehicle, the resulting information is then fed into our probabilistic framework, called Road Layout Estimation (RLE). We prove the effectiveness of this proposal by testing it on sequences from the well-known KITTI dataset and comparing the results with those of a basic RLE version without the proposed pipeline.
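The core idea described above, extruding 2D building outlines from OSM into a 3D reference point cloud and registering observed façade points against it, can be illustrated with a minimal sketch. The following Python code is not the authors' implementation: the footprint coordinates, building height, sampling spacing, and the basic point-to-point ICP are illustrative assumptions standing in for the paper's actual matching pipeline.

```python
# Minimal sketch (not the authors' code): extrude a 2D building footprint,
# as could be obtained from an OSM way, into a synthetic "wall" point cloud
# and align an observed facade cloud to it with basic point-to-point ICP.
import numpy as np
from scipy.spatial import cKDTree


def extrude_footprint(footprint_xy, height=10.0, pt_spacing=0.5):
    """Sample points on the vertical walls obtained by extruding a 2D outline."""
    pts = []
    n = len(footprint_xy)
    for i in range(n):
        a = np.asarray(footprint_xy[i], dtype=float)
        b = np.asarray(footprint_xy[(i + 1) % n], dtype=float)
        n_h = max(2, int(np.linalg.norm(b - a) / pt_spacing))
        n_v = max(2, int(height / pt_spacing))
        for t in np.linspace(0.0, 1.0, n_h):
            xy = a + t * (b - a)
            for z in np.linspace(0.0, height, n_v):
                pts.append([xy[0], xy[1], z])
    return np.asarray(pts)


def icp_point_to_point(source, target, iters=30):
    """Estimate the rigid transform aligning `source` to `target` (basic ICP)."""
    tree = cKDTree(target)
    R, t = np.eye(3), np.zeros(3)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)              # nearest reference point per source point
        tgt = target[idx]
        mu_s, mu_t = src.mean(0), tgt.mean(0)
        H = (src - mu_s).T @ (tgt - mu_t)     # cross-covariance of centered pairs
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:         # guard against reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_t - R_step @ mu_s
        src = (R_step @ src.T).T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t


if __name__ == "__main__":
    # Hypothetical rectangular footprint (metres, local frame).
    footprint = [(0, 0), (20, 0), (20, 10), (0, 10)]
    reference = extrude_footprint(footprint, height=12.0)

    # Simulated facade observation: one wall, slightly rotated and shifted.
    wall = reference[reference[:, 1] < 0.1]
    theta = np.deg2rad(2.0)
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
    observed = (Rz @ wall.T).T + np.array([1.5, -0.8, 0.0])

    R, t = icp_point_to_point(observed, reference)
    print("estimated translation correction:", np.round(t, 2))
```

This sketch covers only the extrusion-and-matching step; in the paper, the observed façade points come from CNN-segmented stereo point clouds, and the resulting correspondences are fused by the probabilistic RLE framework to obtain the lane-level vehicle pose.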

Keywords: autonomous vehicle; point cloud processing; robot perception; urban vehicle localization.

MeSH terms

  • Neural Networks, Computer*
  • Robotics*