Accurate drone corner position estimation in complex backgrounds with boundary classification

Heliyon. 2024 Mar 26;10(7):e28111. doi: 10.1016/j.heliyon.2024.e28111. eCollection 2024 Apr 15.

Abstract

This study develops an efficient approach for precise channel frame detection in complex backgrounds, addressing the critical need for accurate drone navigation. Leveraging YOLACT and group regression, our method outperforms conventional techniques that rely solely on color information. We conducted extensive experiments with channel frames placed at various angles and against intricate backgrounds, training the algorithm to recognize them effectively. The process involves initial edge detection, noise reduction through binarization and erosion, extraction of channel frame line segments using the Hough Transform, and their subsequent classification with the K-means algorithm. Finally, a regression line is fitted to each group by linear regression, and precise corner positions are obtained from the intersection points of these lines. Experiments across diverse angles and challenging backgrounds confirm the robustness of our approach, representing a significant advancement in UAV applications.

Keywords: Boundary classification; Channel frame detection; Deep learning; Object segmentation; YOLACT.
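
The abstract outlines a largely geometric corner-estimation pipeline (binarization and erosion, edge detection, Hough line segments, K-means grouping, per-group line regression, and intersection of the fitted lines). The sketch below is a minimal, illustrative reconstruction of such a pipeline in Python with OpenCV and scikit-learn, assuming the channel frame region has already been isolated (for example by a YOLACT mask); the function name, thresholds, step ordering, and feature choices are assumptions for illustration and are not taken from the paper.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans


def estimate_frame_corners(mask_gray, n_edges=4):
    """Estimate corner points of a roughly rectangular channel frame from a grayscale mask."""
    # Binarize and erode to suppress noise around the segmented frame.
    _, binary = cv2.threshold(mask_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    binary = cv2.erode(binary, np.ones((3, 3), np.uint8), iterations=1)

    # Detect edges, then extract line segments with the probabilistic Hough Transform.
    edges = cv2.Canny(binary, 50, 150)
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                               minLineLength=30, maxLineGap=10)
    if segments is None:
        return None
    segments = segments.reshape(-1, 4).astype(np.float64)  # rows of (x1, y1, x2, y2)

    # Group the segments into one cluster per frame boundary with K-means, using
    # doubled-angle and midpoint features (doubling the angle keeps segments that
    # point in opposite directions in the same cluster).
    angles = np.arctan2(segments[:, 3] - segments[:, 1], segments[:, 2] - segments[:, 0])
    mids = (segments[:, :2] + segments[:, 2:]) / 2.0
    feats = np.column_stack([np.cos(2 * angles) * 100.0, np.sin(2 * angles) * 100.0, mids])
    labels = KMeans(n_clusters=n_edges, n_init=10, random_state=0).fit_predict(feats)

    # Fit one regression line per cluster (least squares via cv2.fitLine) and store it
    # in homogeneous form so that intersections reduce to cross products.
    lines = []
    for k in range(n_edges):
        pts = np.vstack([segments[labels == k][:, :2], segments[labels == k][:, 2:]])
        vx, vy, x0, y0 = cv2.fitLine(pts.astype(np.float32), cv2.DIST_L2, 0, 0.01, 0.01).ravel()
        p1 = np.array([x0, y0, 1.0])
        p2 = np.array([x0 + vx, y0 + vy, 1.0])
        lines.append(np.cross(p1, p2))

    # Intersect every pair of fitted lines and keep the finite intersections that
    # fall inside the image; for a rectangular frame these are the four corners.
    h, w = mask_gray.shape[:2]
    corners = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            pt = np.cross(lines[i], lines[j])
            if abs(pt[2]) < 1e-6:
                continue  # near-parallel edges: no usable intersection
            x, y = pt[:2] / pt[2]
            if 0 <= x < w and 0 <= y < h:
                corners.append((x, y))
    return np.array(corners)
```

Representing each fitted line in homogeneous coordinates makes the corner computation a simple cross product, and the doubled-angle feature prevents K-means from splitting one physical boundary into two clusters merely because its segments point in opposite directions.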