GNViT- An enhanced image-based groundnut pest classification using Vision Transformer (ViT) model

PLoS One. 2024 Mar 25;19(3):e0301174. doi: 10.1371/journal.pone.0301174. eCollection 2024.

Abstract

Crop losses caused by diseases and pests pose substantial challenges to global agriculture, and groundnut crops are particularly vulnerable to their detrimental effects. This study introduces the Groundnut Vision Transformer (GNViT) model, a novel approach built on a Vision Transformer (ViT) pre-trained on the ImageNet dataset. The primary goal is to detect and classify various pests affecting groundnut crops. Rigorous training and evaluation were conducted using a comprehensive dataset drawn from IP102, encompassing pests such as Thrips, Aphids, Armyworms, and Wireworms. The GNViT model's effectiveness was assessed using reliability metrics, including the F1-score, recall, and overall accuracy. Data augmentation with GNViT resulted in a significant increase in training accuracy, reaching 99.52%. Comparative analysis highlighted the GNViT model's superior performance, particularly in accuracy, over state-of-the-art methodologies. These findings underscore the potential of deep learning models such as GNViT to provide reliable pest classification for groundnut crops. Deploying such advanced technology brings us closer to the overarching goal of reducing crop losses and enhancing food security for a growing global population.
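
The abstract describes the general recipe of fine-tuning an ImageNet-pre-trained ViT on augmented pest images. The sketch below illustrates that recipe only; it is not the authors' code, and the class count, folder layout, augmentation transforms, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' implementation): fine-tuning an
# ImageNet-pre-trained ViT-B/16 for multi-class groundnut pest classification,
# with basic data augmentation on the training split.
import torch
import torch.nn as nn
from torchvision import transforms, datasets, models

NUM_PEST_CLASSES = 4  # e.g. Thrips, Aphids, Armyworms, Wireworms (assumed)

# Training-time data augmentation (the specific transforms are assumptions).
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: one subdirectory per pest class.
train_ds = datasets.ImageFolder("data/groundnut_pests/train", transform=train_tf)
train_loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

# Load a ViT-B/16 pre-trained on ImageNet and replace its classification head.
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, NUM_PEST_CLASSES)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Standard fine-tuning loop (epoch count is an assumption).
for epoch in range(10):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Accuracy, recall, and F1-score would then be computed on a held-out test split to mirror the reliability metrics reported in the study.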

MeSH terms

  • Agriculture*
  • Animals
  • Aphids*
  • Benchmarking
  • Crops, Agricultural
  • Reproducibility of Results

Grants and funding

The author(s) received no specific funding for this work.