Real-time scene classification of unmanned aerial vehicles remote sensing image based on Modified GhostNet

PLoS One. 2023 Jun 7;18(6):e0286873. doi: 10.1371/journal.pone.0286873. eCollection 2023.

Abstract

Unmanned Aerial Vehicles (UAVs) play an important role in remote sensing image classification because they can autonomously monitor specific areas and analyze the acquired images. Embedded platforms and deep learning are combined to classify UAV images in real time. However, given limited memory and computational resources, deploying deep learning networks on embedded devices and analyzing ground scenes in real time remain challenging in practical applications. To balance computational cost and classification accuracy, a novel lightweight network based on the original GhostNet is presented. The computational cost of this network is reduced by changing the number of convolutional layers, and the fully connected layer at the end is replaced with a fully convolutional layer. To evaluate the performance of the Modified GhostNet in remote sensing scene classification, experiments are performed on three public datasets: UCMerced, AID, and NWPU-RESISC. Compared with the basic GhostNet, the Floating Point Operations (FLOPs) are reduced from 7.85 MFLOPs to 2.58 MFLOPs, the memory footprint is reduced from 16.40 MB to 5.70 MB, and the prediction time is improved by 18.86%. The Modified GhostNet also increases the average accuracy (Acc) by 4.70% in the AID experiments and 3.39% in the UCMerced experiments. These results indicate that the Modified GhostNet can improve the performance of lightweight networks for scene classification and effectively enable real-time monitoring of ground scenes.
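The two modifications named in the abstract (a shallower stack of Ghost-based convolutional stages, and a fully convolutional classification head in place of the final fully connected layer) can be sketched as below. This is a minimal illustration assuming a PyTorch implementation; the Ghost module follows the general GhostNet idea (a primary convolution plus cheap depthwise "ghost" features), while the layer counts, channel widths, and class count are illustrative assumptions, not the authors' exact configuration.

    # Hedged sketch of a "Modified GhostNet"-style classifier (PyTorch assumed).
    # Stage widths, depth, and num_classes are illustrative, not the paper's values.
    import torch
    import torch.nn as nn

    class GhostModule(nn.Module):
        """Ghost module: a primary 1x1 convolution plus cheap depthwise features."""
        def __init__(self, in_ch, out_ch, ratio=2):
            super().__init__()
            primary_ch = out_ch // ratio
            cheap_ch = out_ch - primary_ch
            self.primary = nn.Sequential(
                nn.Conv2d(in_ch, primary_ch, 1, bias=False),
                nn.BatchNorm2d(primary_ch), nn.ReLU(inplace=True))
            self.cheap = nn.Sequential(  # depthwise conv generates the "ghost" maps
                nn.Conv2d(primary_ch, cheap_ch, 3, padding=1,
                          groups=primary_ch, bias=False),
                nn.BatchNorm2d(cheap_ch), nn.ReLU(inplace=True))

        def forward(self, x):
            y = self.primary(x)
            return torch.cat([y, self.cheap(y)], dim=1)

    class ModifiedGhostNetSketch(nn.Module):
        def __init__(self, num_classes=30, widths=(16, 24, 40, 80)):
            super().__init__()
            layers = [nn.Conv2d(3, widths[0], 3, stride=2, padding=1, bias=False),
                      nn.BatchNorm2d(widths[0]), nn.ReLU(inplace=True)]
            for c_in, c_out in zip(widths[:-1], widths[1:]):  # reduced stage count
                layers += [GhostModule(c_in, c_out), nn.MaxPool2d(2)]
            self.features = nn.Sequential(*layers)
            # Fully convolutional head replacing the fully connected classifier:
            # a 1x1 convolution maps features to class scores, then global average
            # pooling collapses the spatial dimensions.
            self.classifier = nn.Conv2d(widths[-1], num_classes, kernel_size=1)
            self.pool = nn.AdaptiveAvgPool2d(1)

        def forward(self, x):
            x = self.features(x)
            x = self.pool(self.classifier(x))
            return x.flatten(1)

    # Example: a 224x224 RGB scene image yields per-class logits (30 classes, e.g. AID).
    logits = ModifiedGhostNetSketch(num_classes=30)(torch.randn(1, 3, 224, 224))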

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Remote Sensing Technology* / methods
  • Unmanned Aerial Devices*

Grants and funding

Funding: This research was supported by the National Natural Science Foundation of China (Grant Nos. 42001393, 41501370, and 62176165), the 5th College-enterprise Cooperation Project of Shenzhen Technology University (Grant No. 2021010802014), the Shenzhen Science and Technology Program (Grant No. JCYJ20220530152817039), and the Guangdong Science and Technology Strategic Innovation Fund (the Guangdong-Hong Kong-Macau Joint Laboratory Program, Grant No. 2020B1212030009). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.