A Resilient Method for Visual-Inertial Fusion Based on Covariance Tuning

Sensors (Basel). 2022 Dec 14;22(24):9836. doi: 10.3390/s22249836.

Abstract

To improve the localization and pose precision of visual-inertial simultaneous localization and mapping (viSLAM) in complex scenarios, the weights of the visual and inertial inputs must be tuned during sensor fusion. To this end, we propose a resilient viSLAM algorithm based on covariance tuning. During back-end optimization of the viSLAM process, the unit-weight root-mean-square error (RMSE) of the visual reprojection and IMU preintegration residuals is computed in each optimization round to construct a covariance tuning function, which produces a new covariance matrix. This matrix is then used in another round of nonlinear optimization, effectively improving pose and localization precision without closed-loop detection. In the validation experiments, our algorithm outperformed the OKVIS, R-VIO, and VINS-Mono open-source viSLAM frameworks in pose and localization precision on the EuRoC dataset at all difficulty levels.
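
The abstract does not give the exact form of the covariance tuning function, so the following Python sketch only illustrates the general idea as described: the unit-weight RMSE of each modality's whitened residuals is computed, and each modality's covariance is rescaled accordingly before the next round of nonlinear optimization. The function names, array shapes, degrees-of-freedom convention, and quadratic rescaling are illustrative assumptions, not the paper's implementation.

import numpy as np

def unit_weight_rmse(residuals, cov):
    # residuals: (n, d) array of residual vectors for one modality
    # (visual reprojection or IMU preintegration terms).
    # cov: (d, d) covariance currently assigned to that modality.
    info = np.linalg.inv(cov)
    # Sum of squared Mahalanobis norms of the residuals.
    chi2 = sum(r @ info @ r for r in residuals)
    # Dividing by n*d is one possible degrees-of-freedom convention.
    return np.sqrt(chi2 / (residuals.shape[0] * residuals.shape[1]))

def tune_covariances(vis_res, vis_cov, imu_res, imu_cov):
    # Rescale each modality's covariance by its squared unit-weight RMSE,
    # so the modality with larger normalized residuals is down-weighted
    # in the subsequent round of nonlinear optimization.
    s_vis = unit_weight_rmse(vis_res, vis_cov)
    s_imu = unit_weight_rmse(imu_res, imu_cov)
    return s_vis**2 * vis_cov, s_imu**2 * imu_cov

In use, a back end would evaluate the residuals at the current estimate, call tune_covariances to obtain the rebalanced covariance matrices, and then re-run the optimization with the updated weights.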

Keywords: covariance tuning; nonlinear optimization; resilient sensor fusion; simultaneous localization and mapping; visual–inertial fusion.

MeSH terms

  • Algorithms*

Grants and funding

This research was supported by the Foundation of the National Key Laboratory of Electromagnetic Environment (Grant No. 6142403210201).