Crowdsourcing of Histological Image Labeling and Object Delineation by Medical Students

IEEE Trans Med Imaging. 2019 May;38(5):1284-1294. doi: 10.1109/TMI.2018.2883237. Epub 2018 Nov 26.

Abstract

Crowdsourcing in pathology has been performed on tasks that are assumed to be manageable by nonexperts. Demand remains high for annotations of more complex elements in digital microscopic images, such as anatomical structures. Therefore, this paper investigates conditions that enable crowdsourced annotation of high-level image objects, a complex task considered to require expert knowledge. Seventy-six medical students without specific domain knowledge who voluntarily participated in three experiments solved two relevant annotation tasks on histopathological images: 1) labeling of images showing tissue regions and 2) delineation of morphologically defined image objects. We focus on methods to ensure sufficient annotation quality, including several tests on the required number of participants and on the correlation of participants' performance between tasks. In a setup simulating annotation of images with limited ground truth, we validated the feasibility of a confidence score using full ground truth. For this, we computed a majority vote using weighting factors based on the individual assessment of contributors against a scattered gold standard annotated by pathologists. In conclusion, we provide guidance for task design and quality control to enable a crowdsourced approach to obtain the accurate annotations required in the era of digital pathology.
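The weighted majority vote described above can be sketched in code. This is a minimal illustration, not the authors' implementation: it assumes each contributor's weight is their labeling accuracy on the scattered gold-standard subset, and all data structures (`gold_labels`, `all_labels`) are hypothetical.

```python
from collections import defaultdict

def contributor_weight(contributor_labels, gold_labels):
    """Weight a contributor by accuracy on the gold-standard subset
    (an assumed weighting scheme; the paper's exact factors may differ)."""
    graded = [img for img in gold_labels if img in contributor_labels]
    if not graded:
        return 0.0
    correct = sum(contributor_labels[img] == gold_labels[img] for img in graded)
    return correct / len(graded)

def weighted_majority_vote(all_labels, gold_labels):
    """all_labels: {contributor: {image: label}}.
    Returns a consensus label per image, weighting each vote by the
    contributor's performance against the scattered gold standard."""
    weights = {c: contributor_weight(lbls, gold_labels)
               for c, lbls in all_labels.items()}
    tallies = defaultdict(lambda: defaultdict(float))
    for c, lbls in all_labels.items():
        for img, label in lbls.items():
            tallies[img][label] += weights[c]
    return {img: max(votes, key=votes.get) for img, votes in tallies.items()}
```

For example, a contributor who matches the pathologists' gold standard on both graded images outvotes two contributors who score 0.5 and 0.0, so the consensus for an ungraded image follows the reliable contributor.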

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Crowdsourcing / methods*
  • Decision Making / physiology
  • Feasibility Studies
  • Histocytochemistry* / classification
  • Histocytochemistry* / methods
  • Humans
  • Image Processing, Computer-Assisted
  • Reproducibility of Results
  • Students, Medical*