Human selection bias drives the linear nature of the more ground truth effect in explainable deep learning optical coherence tomography image segmentation

J Biophotonics. 2024 Feb;17(2):e202300274. doi: 10.1002/jbio.202300274. Epub 2023 Nov 8.

Abstract

Supervised deep learning (DL) algorithms depend heavily on training data annotated by human graders, for example, for optical coherence tomography (OCT) image annotation. Despite the tremendous success of DL, these ground truth labels can be inaccurate and/or ambiguous because they rest on human judgment, introducing a human selection bias. We therefore investigated the impact of ground truth size and of varying numbers of graders on the predictive performance of the same DL architecture, repeating each experiment three times. The largest training dataset delivered prediction performance close to that of human experts. All DL systems utilized were highly consistent. Nevertheless, the underperforming DL systems could not achieve any further autonomous improvement even after repeated training. Furthermore, a quantifiable linear relationship between ground truth ambiguity and the beneficial effect of a larger amount of ground truth data was detected and termed the more-ground-truth effect.
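
The linear more-ground-truth effect described above amounts to fitting a straight line between label ambiguity and the performance gain from additional ground truth. The following minimal sketch illustrates how such a fit could be quantified; the ambiguity scores, Dice gains, and variable names are invented placeholders, not data or code from the study:

```python
# Hypothetical sketch (not from the paper): quantifying a linear
# "more-ground-truth effect" as a least-squares fit of performance
# gain against ground truth ambiguity. All numbers are invented.
import numpy as np

# Invented per-structure ambiguity, e.g., inter-grader disagreement rate
ambiguity = np.array([0.05, 0.10, 0.20, 0.30, 0.45])
# Invented gain in Dice score when training on the largest
# versus the smallest ground truth dataset
gain_from_more_gt = np.array([0.01, 0.03, 0.06, 0.10, 0.14])

# Least-squares line: gain ~ slope * ambiguity + intercept
slope, intercept = np.polyfit(ambiguity, gain_from_more_gt, deg=1)
r = np.corrcoef(ambiguity, gain_from_more_gt)[0, 1]

print(f"slope={slope:.3f}, intercept={intercept:.3f}, Pearson r={r:.3f}")
# A positive slope with r close to 1 would indicate that more ambiguous
# structures benefit more from additional ground truth data.
```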

Keywords: explainable AI; machine learning; optical coherence tomography; retina.

MeSH terms

  • Algorithms
  • Deep Learning*
  • Humans
  • Selection Bias
  • Tomography, Optical Coherence / methods