An Unsupervised Monocular Visual Odometry Based on Multi-Scale Modeling

Sensors (Basel). 2022 Jul 11;22(14):5193. doi: 10.3390/s22145193.

Abstract

Unsupervised deep learning methods have shown great success in jointly estimating camera pose and depth from monocular videos. However, previous methods have mostly ignored multi-scale information, which is crucial for both pose estimation and depth estimation, especially when the motion pattern changes. This article proposes an unsupervised framework for monocular visual odometry (VO) that models multi-scale information. The proposed method utilizes densely linked atrous convolutions to enlarge the receptive field without losing image information, and adopts a non-local self-attention mechanism to effectively model long-range dependencies. Both mechanisms capture objects of different scales in the image, thereby improving the accuracy of VO, especially in rotating scenes. Extensive experiments on the KITTI dataset show that our approach is competitive with other state-of-the-art unsupervised learning-based monocular methods and comparable to supervised or model-based methods. In particular, we achieve state-of-the-art results on rotation estimation.

Keywords: V-SLAM; unsupervised learning; visual odometry.
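The two ingredients named in the abstract, atrous (dilated) convolutions for a larger receptive field and non-local self-attention for long-range dependency, can be illustrated with a minimal sketch. The function names, the identity embeddings, and the toy shapes below are our assumptions for illustration only; the paper's actual network learns 1x1-convolution projections and operates on image feature maps.

```python
import numpy as np

def dilated_receptive_field(dilations, kernel_size=3):
    """Receptive field of a stack of dilated (atrous) convolutions.

    Each k x k conv with dilation d extends the receptive field by
    (kernel_size - 1) * d on top of the layers below it, without
    downsampling the feature map.
    """
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

def non_local_attention(x):
    """Minimal non-local self-attention over flattened positions.

    x: (N, C) array of N spatial positions with C channels. The
    learned embeddings (theta, phi, g) of a real non-local block
    are replaced by identity mappings here for simplicity.
    """
    scores = x @ x.T                               # pairwise similarity (N, N)
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ x                             # aggregate over all positions

# Three stacked 3x3 convs with dilations 1, 2, 4: receptive field 15
print(dilated_receptive_field([1, 2, 4]))

x = np.random.randn(16, 8)   # 16 positions, 8 channels
y = non_local_attention(x)
print(y.shape)
```

Each output position in `non_local_attention` is a convex combination of all input positions, which is what lets the block model dependencies beyond any fixed convolutional receptive field.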

Grants and funding

This research was funded by the National Natural Science Foundation of China (Grant No. 61976173), the MoE-CMCC Artificial Intelligence Project (Grant MCM20190701), the National Key Research and Development Program of China (Grant 2018AAA0102201), and the Development Program of Shaanxi (Grant 2020GY-002).