Deep Learning-Aided Inertial/Visual/LiDAR Integration for GNSS-Challenging Environments

Sensors (Basel). 2023 Jun 29;23(13):6019. doi: 10.3390/s23136019.

Abstract

This research develops an integrated navigation system that fuses measurements from an inertial measurement unit (IMU), a LiDAR, and a monocular camera using an extended Kalman filter (EKF) to provide accurate positioning during prolonged GNSS signal outages. The system integrates INS with monocular visual simultaneous localization and mapping (SLAM) and exploits LiDAR depth measurements to resolve the scale ambiguity inherent in monocular visual odometry. The proposed system was tested on two datasets, KITTI and Leddar PixSet, which cover a wide range of driving environments. It yielded an average reduction in root-mean-square error (RMSE) of about 80% and 92% in the horizontal and vertical directions, respectively. The proposed system was also compared with an INS/monocular visual SLAM/LiDAR SLAM integration and with several state-of-the-art SLAM algorithms.
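The article itself does not include source code; the minimal Python sketch below illustrates one common way LiDAR depth can be used to recover the metric scale of a monocular visual odometry estimate, as described in the abstract. It is not the authors' exact procedure: the function name recover_monocular_scale, the nearest-projected-point association, the max_px_dist threshold, and the median-ratio scale estimator are all illustrative assumptions.

```python
import numpy as np

def recover_monocular_scale(vo_translation, feat_uv, feat_depth_vo,
                            lidar_points_cam, K, max_px_dist=2.0):
    """Illustrative LiDAR-aided scale recovery for monocular VO.

    vo_translation   : (3,) up-to-scale camera translation from monocular VO.
    feat_uv          : (N, 2) pixel coordinates of triangulated VO features.
    feat_depth_vo    : (N,) up-to-scale depths of those features.
    lidar_points_cam : (M, 3) LiDAR points already transformed into the
                       camera frame (extrinsic calibration assumed known).
    K                : (3, 3) camera intrinsic matrix.
    """
    # Project LiDAR points lying in front of the camera onto the image plane.
    in_front = lidar_points_cam[:, 2] > 0.5
    pts = lidar_points_cam[in_front]
    proj = (K @ pts.T).T
    uv_lidar = proj[:, :2] / proj[:, 2:3]
    depth_lidar = pts[:, 2]

    # Associate each VO feature with the nearest projected LiDAR point and
    # collect the ratio of metric (LiDAR) depth to up-to-scale (VO) depth.
    ratios = []
    for (u, v), d_vo in zip(feat_uv, feat_depth_vo):
        dist = np.hypot(uv_lidar[:, 0] - u, uv_lidar[:, 1] - v)
        j = int(np.argmin(dist))
        if dist[j] < max_px_dist and d_vo > 1e-6:
            ratios.append(depth_lidar[j] / d_vo)

    if not ratios:
        return vo_translation, 1.0  # no LiDAR overlap; keep unscaled estimate

    # The median ratio gives a simple, robust estimate of the scale factor.
    scale = float(np.median(ratios))
    return scale * vo_translation, scale
```

In a loosely coupled scheme of the kind the abstract describes, each up-to-scale VO translation would be rescaled this way before being used as a measurement in the EKF update, so the visual increments are metrically consistent with the inertial mechanization.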

Keywords: GNSS-denied environments; INS/LIMO; INS/LIMO/LiDAR; integrated navigation system.

MeSH terms

  • Algorithms
  • Deep Learning*

Grants and funding

This research was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC), the TMU Graduate Fellowship (TMU GF), and the Ontario Graduate Scholarship (OGS).