Conception of a High-Level Perception and Localization System for Autonomous Driving

Sensors (Basel). 2022 Dec 9;22(24):9661. doi: 10.3390/s22249661.

Abstract

This paper describes the conception of a high-level, compact, scalable, long-autonomy perception and localization system for autonomous driving applications. Our benchmark comprises a high-resolution lidar (128 channels), a global-shutter stereo camera, an inertial navigation system, a time server, and an embedded computer. In addition, to acquire data and build multi-modal datasets, the system embeds two perception algorithms (RBNN detection and DCNN detection) and one localization algorithm (lidar-based localization) to provide advanced real-time information such as object detection and localization in challenging environments (e.g., GPS-denied areas). To train and evaluate the perception algorithms, a dataset of 10,000 annotated lidar frames was built from drives carried out under different weather conditions and different traffic and population densities. The performance of the three algorithms is competitive with the state of the art, and their processing times are compatible with real-time autonomous driving applications. By directly providing accurate advanced outputs, this system can significantly facilitate the work of researchers and engineers on planning and control modules. This study thereby intends to contribute to democratizing access to autonomous vehicle research platforms.
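The abstract does not detail the RBNN (Radially Bounded Nearest Neighbor) detector, but its core clustering step on lidar point clouds can be sketched as follows. This is an illustrative, brute-force sketch under stated assumptions: the function name, radius, and minimum-cluster-size defaults are illustrative choices, not the authors' implementation, and a real-time system would replace the per-point distance scan with a k-d tree neighbor query.

```python
import numpy as np

def rbnn_cluster(points, radius=0.5, min_cluster_size=3):
    """Illustrative RBNN-style clustering of an (N, 3) lidar point array.

    Points whose neighborhoods fall within `radius` of each other are merged
    into one cluster; clusters smaller than `min_cluster_size` are discarded
    and labeled -1 (noise). Parameter defaults are assumptions for this sketch.
    """
    n = len(points)
    labels = np.full(n, -1, dtype=int)
    next_label = 0
    for i in range(n):
        # Brute-force radius search (a k-d tree would be used in practice).
        dists = np.linalg.norm(points - points[i], axis=1)
        neighbors = np.where(dists <= radius)[0]  # includes point i itself
        existing = set(labels[neighbors].tolist()) - {-1}
        if existing:
            # Merge every touched cluster into one label.
            target = min(existing)
            for lab in existing:
                labels[labels == lab] = target
        else:
            # Start a new cluster.
            target = next_label
            next_label += 1
        labels[neighbors] = target
    # Discard undersized clusters as noise.
    for lab in set(labels.tolist()):
        if np.sum(labels == lab) < min_cluster_size:
            labels[labels == lab] = -1
    return labels
```

With a suitable radius, two well-separated groups of points come back with two distinct labels, which downstream detection stages can then fit with bounding boxes.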

Keywords: autonomous vehicles; clustering; deep learning; localization; mapping; perception; self-driving cars.

MeSH terms

  • Algorithms
  • Automobile Driving*
  • Autonomous Vehicles
  • Perception