Learning rate of students detecting and annotating pediatric wrist fractures in supervised artificial intelligence dataset preparations

PLoS One. 2022 Oct 20;17(10):e0276503. doi: 10.1371/journal.pone.0276503. eCollection 2022.

Abstract

The use of artificial intelligence (AI) in image analysis is an intensively debated topic in the radiology community. AI computer vision algorithms typically rely on large-scale image databases annotated by specialists, which are time-consuming to develop and maintain; involving non-experts in the annotation workflow should therefore be considered. We assessed the learning rate of inexperienced evaluators in correctly labeling pediatric wrist fractures on digital radiographs. Students with and without a medical background labeled wrist fractures with bounding boxes in 7,000 radiographs over ten days, and pediatric radiologists regularly discussed their mistakes with them. Under specialist feedback, F1 scores, as a measure of detection rate, increased substantially (mean 0.61±0.19 at day 1 to 0.97±0.02 at day 10, p<0.001), whereas the Intersection over Union, as a parameter of labeling precision, improved less markedly (mean 0.27±0.29 at day 1 to 0.53±0.25 at day 10, p<0.001). The time needed to correct the students decreased significantly (mean 22.7±6.3 seconds per image at day 1 to 8.9±1.2 seconds at day 10, p<0.001) and was substantially lower than the time the radiologists needed to annotate the images alone. In conclusion, our data show that involving undergraduate students in the annotation of pediatric wrist radiographs yields substantial time savings for specialists and should therefore be considered.
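Both metrics reported above are standard in object detection. As an illustration only, and not the authors' code, the sketch below computes the Intersection over Union of two axis-aligned bounding boxes and the F1 score from true-positive, false-positive, and false-negative counts; the box coordinates and counts in the usage example are hypothetical.

```python
# Illustrative sketch of the two metrics used in the study (not the authors' code).
# Boxes are axis-aligned rectangles given as (x_min, y_min, x_max, y_max).

def iou(box_a, box_b):
    """Intersection over Union of two bounding boxes, in [0, 1]."""
    # Coordinates of the overlapping rectangle (empty if the boxes are disjoint).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def f1_score(true_positives, false_positives, false_negatives):
    """F1 score: harmonic mean of precision and recall."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: a student's box compared with a radiologist's reference box.
student_box = (30, 40, 80, 90)
reference_box = (35, 45, 85, 95)
print(f"IoU: {iou(student_box, reference_box):.2f}")  # labeling precision
print(f"F1:  {f1_score(90, 5, 5):.2f}")               # detection rate
```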

MeSH terms

  • Artificial Intelligence
  • Child
  • Fractures, Bone* / diagnostic imaging
  • Humans
  • Radiologists
  • Radiology* / methods
  • Students
  • Wrist / diagnostic imaging

Grants and funding

The author(s) received no specific funding for this work.