Vision-based safe autonomous UAV docking with panoramic sensors

Front Robot AI. 2023 Nov 23:10:1223157. doi: 10.3389/frobt.2023.1223157. eCollection 2023.

Abstract

The remarkable growth of unmanned aerial vehicles (UAVs) has also sparked concerns about safety during their missions. To advance towards safer autonomous aerial robots, this work presents a vision-based solution for ensuring safe autonomous UAV landings with minimal infrastructure. During docking maneuvers, UAVs pose a hazard to people in the vicinity. In this paper, we propose the use of a single omnidirectional panoramic camera pointing upwards from a landing pad to detect and estimate the position of people around the landing area. The images are processed in real time on an embedded computer, which communicates with the onboard computer of approaching UAVs to transition between landing, hovering, or emergency landing states. While landing, the ground camera also aids in finding an optimal position, which can be required in case of low battery or when hovering is no longer possible. We use a YOLOv7-based object detection model and an XGBoost model for localizing nearby people, and the open-source ROS and PX4 frameworks for communication, interfacing, and control of the UAV. We present both simulation and real-world indoor experimental results to show the effectiveness of our methods.
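The ground station's role of switching the approaching UAV between landing, hovering, and emergency landing based on detected people can be sketched as a simple decision function. This is an illustrative sketch only: the function name, state labels, and distance thresholds below are assumptions for exposition, not the parameters used in the paper.

```python
def decide_uav_action(person_distances, battery_low=False,
                      safe_radius=5.0, caution_radius=10.0):
    """Choose a UAV docking state from ground-camera person estimates.

    person_distances: estimated distances (m) of detected people from
    the landing pad center (e.g., from a detector + localization model).
    safe_radius / caution_radius: illustrative thresholds, not the
    values from the paper.
    """
    # Distance to the closest detected person; infinity if none detected.
    nearest = min(person_distances, default=float("inf"))

    if nearest < safe_radius:
        # Someone is dangerously close to the pad: hold position,
        # unless the battery forces an immediate emergency landing.
        return "emergency_land" if battery_low else "hover"
    if nearest < caution_radius:
        # People nearby but outside the immediate danger zone.
        return "hover"
    # Area clear: proceed with the landing maneuver.
    return "land"
```

For example, `decide_uav_action([12.0])` permits landing, while `decide_uav_action([3.0])` commands hovering and `decide_uav_action([3.0], battery_low=True)` triggers an emergency landing, mirroring the state transitions described above.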

Keywords: deep learning; object detection; panoramic camera; safe landing; unmanned aerial vehicle (UAV); vision-based localization.

Grants and funding

This research was supported by the Academy of Finland's AeroPolis project (Grant No. 348480).