Inside out: transforming images of lab-grown plants for machine learning applications in agriculture

Front Artif Intell. 2023 Jul 6;6:1200977. doi: 10.3389/frai.2023.1200977. eCollection 2023.

Abstract

Introduction: Machine learning tasks often require a significant amount of training data for the resulting network to perform well on a given problem. In agriculture, dataset sizes are further limited by phenotypic differences between two plants of the same genotype, often a result of different growing conditions. Synthetically augmented datasets have shown promise for improving existing models when real data are not available.

Methods: In this paper, we employ a contrastive unpaired translation (CUT) generative adversarial network (GAN) and simple image-processing techniques to translate indoor plant images so that they appear to have been captured in the field. Although we train our network on images containing only a single plant, we show that our method is easily extended to produce multi-plant field images.
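The multi-plant extension described above amounts to compositing: several translated single-plant crops are placed onto a field-like background. The abstract does not specify the paper's actual image-processing pipeline, so the following NumPy sketch is only illustrative; the function name, the near-black transparency heuristic, and all array sizes are assumptions.

```python
import numpy as np

def composite_plants(background, plant_patches, positions):
    """Paste translated single-plant patches onto a field background.

    background:    H x W x 3 uint8 array (field-like image).
    plant_patches: list of h x w x 3 uint8 arrays (e.g. CUT-translated plants).
    positions:     list of (row, col) top-left coordinates for each patch.
    """
    field = background.copy()
    for patch, (r, c) in zip(plant_patches, positions):
        h, w = patch.shape[:2]
        # Treat near-black pixels as transparent so only the plant is pasted
        # (a stand-in for a proper segmentation mask).
        mask = patch.sum(axis=2) > 30
        region = field[r:r + h, c:c + w]
        region[mask] = patch[mask]
    return field

# Example: two 8x8 "plants" on a 32x32 uniform background.
bg = np.full((32, 32, 3), 120, dtype=np.uint8)
plant = np.zeros((8, 8, 3), dtype=np.uint8)
plant[2:6, 2:6] = [0, 200, 0]  # green square standing in for a plant
multi = composite_plants(bg, [plant, plant], [(0, 0), (16, 16)])
```

Because `region` is a view into `field`, the masked assignment writes the plant pixels directly into the output image while leaving the background elsewhere untouched.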

Results: We use our synthetic multi-plant images to train several YOLOv5 nano object detection models for plant detection and measure each model's accuracy on real field images.
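Detection accuracy on field images is conventionally scored by matching predicted boxes to ground-truth boxes via intersection-over-union (IoU). The abstract does not detail the metric used, so this is a generic sketch with hypothetical box coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A prediction typically counts as a true positive when its IoU with some
# ground-truth box meets a threshold such as 0.5.
pred = (10, 10, 50, 50)
truth = (12, 12, 48, 52)
matched = iou(pred, truth) >= 0.5  # True for this pair
```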

Discussion: The inclusion of training data generated by the CUT-GAN leads to better plant detection performance compared to a network trained solely on real data.

Keywords: agriculture 4.0; convolutional neural networks; data augmentation; deep learning; digital agriculture; generative adversarial networks; image augmentation.

Grants and funding

This work was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant program (Nos. RGPIN-2018-04088 and RGPIN-2020-06191), Compute Canada (now Digital Research Alliance of Canada) Resources for Research Groups competition (No. 1679), Western Economic Diversification Canada (No. 15453), and the Mitacs Accelerate Grant program (No. IT14120).