Sparse Optical Flow Implementation Using a Neural Network for Low-Resolution Thermal Aerial Imaging

J Imaging. 2022 Oct 12;8(10):279. doi: 10.3390/jimaging8100279.

Abstract

This study is inspired by the sparse Lucas-Kanade algorithm, widely used for real-time optical flow, and applies a feature extractor to reduce the computational requirements of optical-flow-based neural networks on real-world thermal aerial imagery. Although deep-learning-based algorithms have achieved state-of-the-art accuracy and outperformed most traditional techniques, most of them cannot be deployed on a small multi-rotor UAV due to the size and weight constraints of the platform. This challenge stems from the high computational cost of these techniques, whose implementations require an integrated graphics processing unit and a powerful on-board computer to run in real time, resulting in a larger payload and consequently a shorter flight time. For navigation applications that only require a 2D optical flow vector, a dense flow field computed by a deep learning neural network contains redundant information. A feature extractor based on the Shi-Tomasi technique was used to extract only appropriate features from thermal images for computing optical flow. The state-of-the-art RAFT-s model was trained with full images and with our proposed alternative input, showing a substantial increase in speed while maintaining accuracy in the presence of high thermal contrast, where features could be detected.
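To illustrate the general idea of pairing Shi-Tomasi feature extraction with sparse optical flow, the following is a minimal sketch, not the paper's implementation: it uses OpenCV's Shi-Tomasi corner detector to select sparse points from a thermal frame and tracks them with a classical pyramidal Lucas-Kanade tracker as a stand-in for the RAFT-s network; the file names and parameter values are illustrative assumptions.

```python
# Minimal sketch (assumptions noted): Shi-Tomasi corner detection to select
# sparse features from a low-resolution thermal frame, then sparse optical
# flow between consecutive frames. Pyramidal Lucas-Kanade is used here as a
# classical stand-in for the learned flow estimator described in the paper.
import cv2
import numpy as np

# Load two consecutive thermal frames as 8-bit grayscale (hypothetical files).
prev_frame = cv2.imread("thermal_frame_000.png", cv2.IMREAD_GRAYSCALE)
next_frame = cv2.imread("thermal_frame_001.png", cv2.IMREAD_GRAYSCALE)

# Shi-Tomasi feature extraction: keep only strong corners, which in thermal
# imagery correspond to regions of high thermal contrast.
corners = cv2.goodFeaturesToTrack(
    prev_frame,
    maxCorners=100,     # cap on the number of sparse features
    qualityLevel=0.05,  # minimum accepted corner quality (relative)
    minDistance=7,      # minimum spacing between features in pixels
)

if corners is not None:
    # Track the sparse features into the next frame.
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_frame, next_frame, corners, None,
        winSize=(21, 21), maxLevel=3,
    )

    # Keep successfully tracked points and average their displacement to
    # obtain a single 2D flow vector suitable for navigation.
    good_old = corners[status.flatten() == 1].reshape(-1, 2)
    good_new = next_pts[status.flatten() == 1].reshape(-1, 2)
    if len(good_new) > 0:
        mean_flow = np.mean(good_new - good_old, axis=0)
        print(f"Mean 2D flow vector (dx, dy): {mean_flow}")
    else:
        print("No features tracked; insufficient thermal contrast.")
else:
    print("No Shi-Tomasi features found in this frame.")
```

In the study itself, the Shi-Tomasi features define the reduced input passed to the RAFT-s network rather than to a Lucas-Kanade tracker; the sketch only demonstrates the feature-selection step and the reduction of a flow field to a single 2D vector.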

Keywords: LWIR; UAVs; deep learning; navigation; optical flow; thermal imaging.

Grants and funding

This research was supported by an Australian Government Research Training Program (RTP) Scholarship.