Can masses of non-experts train highly accurate image classifiers? A crowdsourcing approach to instrument segmentation in laparoscopic images

Med Image Comput Comput Assist Interv. 2014;17(Pt 2):438-45. doi: 10.1007/978-3-319-10470-6_55.

Abstract

Machine learning algorithms are gaining increasing interest in the context of computer-assisted interventions. One of the bottlenecks so far, however, has been the availability of training data, which is typically generated by medical experts with very limited resources. Crowdsourcing is a recent trend based on outsourcing cognitive tasks to many anonymous, untrained individuals from an online community. In this work, we investigate the potential of crowdsourcing for segmenting medical instruments in endoscopic image data. Our study suggests that (1) segmentations computed from the annotations of multiple anonymous non-experts are comparable to those made by medical experts and (2) training data generated by the crowd is of the same quality as that annotated by medical experts. Given the speed of annotation, the scalability, and the low costs, this implies that the scientific community might no longer need to rely on experts to generate reference or training data for certain applications. To trigger further research in endoscopic image processing, the data used in this study will be made publicly available.
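The abstract does not spell out how the multiple non-expert annotations are merged or how agreement with the expert reference is measured; a common approach for such studies is pixel-wise majority voting followed by an overlap metric such as the Dice similarity coefficient. The sketch below is an illustration under those assumptions, not a description of the authors' exact pipeline; the toy masks and function names are hypothetical.

```python
import numpy as np

def fuse_majority_vote(masks):
    """Fuse several binary instrument masks (H x W arrays of 0/1)
    by pixel-wise majority voting (assumed fusion strategy)."""
    stacked = np.stack(masks, axis=0)          # shape: (n_annotators, H, W)
    votes = stacked.sum(axis=0)                # "instrument" votes per pixel
    return (votes * 2 > stacked.shape[0]).astype(np.uint8)  # strict majority

def dice(pred, ref):
    """Dice similarity coefficient between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Toy 4x4 example: three crowd annotations compared to one expert reference.
crowd = [
    np.array([[0,0,1,1],[0,1,1,1],[0,1,1,0],[0,0,0,0]], dtype=np.uint8),
    np.array([[0,0,1,1],[0,0,1,1],[0,1,1,0],[0,0,0,0]], dtype=np.uint8),
    np.array([[0,1,1,1],[0,1,1,1],[0,1,0,0],[0,0,0,0]], dtype=np.uint8),
]
expert = np.array([[0,0,1,1],[0,1,1,1],[0,1,1,0],[0,0,0,0]], dtype=np.uint8)

fused = fuse_majority_vote(crowd)
print("Dice (fused crowd vs. expert):", round(dice(fused, expert), 3))
```

Majority voting suppresses individual annotation errors, which is one plausible reason why fused non-expert segmentations can approach expert quality; the paper itself should be consulted for the actual fusion and evaluation protocol.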

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms
  • Artificial Intelligence*
  • Crowdsourcing / instrumentation*
  • Crowdsourcing / methods*
  • Equipment Design
  • Equipment Failure Analysis
  • Humans
  • Image Enhancement / instrumentation
  • Image Enhancement / methods
  • Information Storage and Retrieval / methods*
  • Laparoscopes*
  • Laparoscopy / methods*
  • Observer Variation
  • Pattern Recognition, Automated / methods*
  • Reproducibility of Results
  • Sensitivity and Specificity