Navigating an Automated Driving Vehicle via the Early Fusion of Multi-Modality

Sensors (Basel). 2022 Feb 13;22(4):1425. doi: 10.3390/s22041425.

Abstract

The ability of artificial intelligence to drive toward an intended destination is a key component of an autonomous vehicle. Two main paradigms are currently employed to advance this capability. On the one hand, modular pipelines break the driving model down into submodels, such as perception, maneuver planning, and control. On the other hand, end-to-end driving methods map raw sensor data directly to vehicle control signals. The latter is less well studied but is gaining popularity because it is simpler to deploy. This article focuses on end-to-end autonomous driving, using RGB images as the primary sensor input. The autonomous vehicle is equipped with a camera and active sensors, such as LiDAR and Radar, for safe navigation. Active sensors (e.g., LiDAR) provide more accurate depth information than passive sensors. This paper therefore examines whether combining RGB images from the camera with active depth information from LiDAR yields better results in end-to-end autonomous driving than using a single modality. It focuses on the early fusion of these modalities and demonstrates, using the CARLA simulator, that it outperforms a single modality.
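To make the idea of conditional early fusion concrete, the sketch below shows one minimal way such a model could be structured in PyTorch: the RGB image and a LiDAR-derived depth map are concatenated channel-wise before the first convolution (early fusion), and a command-conditioned branch predicts steering, throttle, and brake in the spirit of conditional imitation learning. The class name, layer sizes, input resolution, and command ordering are illustrative assumptions, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn


class EarlyFusionCIL(nn.Module):
    """Minimal sketch of conditional early fusion (CEF).

    RGB and depth are fused at the input and a per-command branch
    outputs the three control signals. Sizes are illustrative only.
    """

    def __init__(self, num_commands: int = 4):
        super().__init__()
        # 3 RGB channels + 1 depth channel, fused before any convolution.
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One small head per high-level navigation command
        # (e.g., follow lane, turn left, turn right, go straight).
        self.branches = nn.ModuleList(
            [nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 3))
             for _ in range(num_commands)]
        )

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor, command: torch.Tensor):
        # rgb: (B, 3, H, W), depth: (B, 1, H, W), command: (B,) integer index.
        fused = torch.cat([rgb, depth], dim=1)  # early fusion of modalities
        features = self.encoder(fused)
        outputs = torch.stack([branch(features) for branch in self.branches], dim=1)
        # Select the branch matching each sample's navigation command.
        idx = command.view(-1, 1, 1).expand(-1, 1, outputs.size(-1))
        return outputs.gather(1, idx).squeeze(1)  # (B, 3): steer, throttle, brake


# Usage example: one frame with the "turn left" command (index 1 assumed).
model = EarlyFusionCIL()
controls = model(torch.rand(1, 3, 88, 200), torch.rand(1, 1, 88, 200), torch.tensor([1]))
print(controls.shape)  # torch.Size([1, 3])
```

In this sketch, a single-modality baseline would simply change the first convolution to accept 3 (RGB only) or 1 (depth only) input channels; the early-fusion variant differs only in that concatenation step.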

Keywords: CARLA; artificial intelligence; conditional early fusion (CEF); conditional imitation learning (CIL); end-to-end autonomous driving; object detection; safe navigation; situation understanding.

MeSH terms

  • Algorithms*
  • Artificial Intelligence
  • Automobile Driving*
  • Radar
  • Research Design