HVIOnet: A deep learning based hybrid visual-inertial odometry approach for unmanned aerial system position estimation

Neural Netw. 2022 Nov:155:461-474. doi: 10.1016/j.neunet.2022.09.001. Epub 2022 Sep 7.

Abstract

Sensor fusion solves the localization problem in autonomous mobile robotics by integrating complementary data acquired from multiple sensors. In this study, we adopt Visual-Inertial Odometry (VIO), a low-cost sensor fusion method that integrates inertial data with images using a Deep Learning (DL) framework to predict the position of an Unmanned Aerial System (UAS). The developed system has three steps. The first step extracts features from images acquired by the platform camera and uses a Convolutional Neural Network (CNN) to project them onto a visual feature manifold. Next, temporal features are extracted from the platform's Inertial Measurement Unit (IMU) data using a Bidirectional Long Short-Term Memory (BiLSTM) network and projected onto an inertial feature manifold. The final step estimates the UAS position by fusing the visual and inertial feature manifolds via a BiLSTM-based architecture. The proposed approach is tested with the public EuRoC (European Robotics Challenge) dataset and with simulation data generated within the Robot Operating System (ROS). Results on the EuRoC dataset show that the proposed approach achieves position estimates comparable to popular existing VIO methods. In addition, in the experiment with the simulation dataset, the UAS position is estimated with a Root Mean Square Error (RMSE) of 0.167. These results demonstrate that the proposed deep architecture is useful for UAS position estimation.
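The three-step pipeline described above (CNN visual encoder, BiLSTM inertial encoder, BiLSTM fusion) can be sketched as a minimal PyTorch model. All layer sizes, the IMU-to-image alignment strategy, and the output head are hypothetical illustrations, not the paper's actual architecture or hyperparameters:

```python
import torch
import torch.nn as nn

class HVIONetSketch(nn.Module):
    """Illustrative sketch of a CNN + BiLSTM visual-inertial fusion
    network, following the three steps named in the abstract. Layer
    sizes and alignment details are assumptions for demonstration."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Step 1: CNN projects camera frames onto a visual feature manifold.
        self.visual_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Step 2: BiLSTM projects 6-axis IMU sequences (accelerometer +
        # gyroscope) onto an inertial feature manifold.
        self.inertial_encoder = nn.LSTM(
            input_size=6, hidden_size=feat_dim // 2,
            batch_first=True, bidirectional=True,
        )
        # Step 3: BiLSTM fuses the concatenated manifolds; a linear
        # head regresses the 3-D position at each time step.
        self.fusion = nn.LSTM(
            input_size=2 * feat_dim, hidden_size=feat_dim,
            batch_first=True, bidirectional=True,
        )
        self.position_head = nn.Linear(2 * feat_dim, 3)

    def forward(self, images: torch.Tensor, imu: torch.Tensor) -> torch.Tensor:
        # images: (batch, seq, 3, H, W); imu: (batch, seq * k, 6), where
        # k is the IMU-to-camera rate ratio.
        b, s = images.shape[:2]
        vis = self.visual_encoder(images.flatten(0, 1)).view(b, s, -1)
        inr, _ = self.inertial_encoder(imu)
        # Align the higher-rate IMU features to the image rate by
        # subsampling (an assumed, simplistic alignment).
        inr = inr[:, :: inr.shape[1] // s, :][:, :s, :]
        fused, _ = self.fusion(torch.cat([vis, inr], dim=-1))
        return self.position_head(fused)  # (batch, seq, 3) positions
```

A forward pass on a batch of 4-frame image sequences with a 10x-rate IMU stream returns one 3-D position estimate per frame; training such a model would minimize the position error (e.g. MSE) against ground-truth trajectories.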

Keywords: BiLSTM; IMU; RNN; ROS; UAS; VIO.

MeSH terms

  • Deep Learning*
  • Memory, Long-Term
  • Neural Networks, Computer
  • Robotics*
