A Novel Approach to Pod Count Estimation Using a Depth Camera in Support of Soybean Breeding Applications

Sensors (Basel). 2023 Jul 18;23(14):6506. doi: 10.3390/s23146506.

Abstract

Improving soybean (Glycine max (L.) Merr.) yield is crucial for strengthening national food security. Predicting soybean yield is essential to maximize the potential of crop varieties. Non-destructive methods are needed to estimate yield before crop maturity. Various approaches, including the pod-count method, have been used to predict soybean yield, but they are often hampered by the crop background color. To address this challenge, we explored the use of a depth camera for real-time filtering of RGB images, aiming to enhance the performance of the pod-counting classification model. Additionally, this study aimed to compare object detection models (YOLOv7 and YOLOv7-E6E) and select the most suitable deep learning (DL) model for counting soybean pods. After identifying the best architecture, we compared the model's performance when the DL model was trained with and without background removal from the images. Results demonstrated that removing the background with the depth camera improved YOLOv7's pod detection performance by 10.2% in precision, 16.4% in recall, 13.8% in mAP@0.5, and 17.7% in mAP@0.5:0.95 compared to when the background was present. Using the depth camera and the YOLOv7 algorithm for pod detection and counting yielded a mAP@0.5 of 93.4% and a mAP@0.5:0.95 of 83.9%. These results indicated a significant improvement in the DL model's performance when the background was segmented and a reasonably large dataset was used to train YOLOv7.
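
The background-removal step described above amounts to masking out RGB pixels whose aligned depth value lies beyond a near-range threshold, so that only the foreground plant remains before detector training. The Python sketch below illustrates that general idea with NumPy; the function name, the 0.6 m threshold, and the assumption of a pixel-aligned depth map in metres are illustrative and not taken from the paper.

```python
# Minimal sketch of depth-based background removal, assuming an RGB image and a
# pixel-aligned depth map (in metres) are already available from a depth camera.
import numpy as np

def remove_background(rgb: np.ndarray, depth_m: np.ndarray,
                      max_depth_m: float = 0.6) -> np.ndarray:
    """Zero out pixels farther than max_depth_m, keeping only foreground pixels."""
    mask = (depth_m > 0) & (depth_m <= max_depth_m)  # valid readings within range
    out = np.zeros_like(rgb)
    out[mask] = rgb[mask]                            # copy only foreground pixels
    return out

# Tiny synthetic example: a 4x4 image whose right half lies beyond the threshold.
rgb = np.full((4, 4, 3), 200, dtype=np.uint8)
depth = np.tile(np.array([0.4, 0.4, 1.2, 1.2]), (4, 1))  # metres
foreground = remove_background(rgb, depth)
print(foreground[:, :, 0])  # right two columns are masked to zero
```

In practice, the masked images would then be used for training and evaluating the detector; the depth threshold is a placeholder for whatever working distance separates the target plants from the background rows.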

Keywords: background segmentation; computer vision; deep learning; depth camera; high throughput phenotyping; machine vision; soybean pod-counting.

MeSH terms

  • Glycine max*
  • Plant Breeding*