Deep learning-based medical image segmentation with limited labels

Phys Med Biol. 2020 Nov 20;65(23). doi: 10.1088/1361-6560/abc363.

Abstract

Deep learning (DL)-based auto-segmentation has the potential for accurate organ delineation in radiotherapy applications but requires large amounts of clean labeled data to train a robust model. However, annotating medical images is extremely time-consuming and requires clinical expertise, especially for segmentation, which demands voxel-wise labels. On the other hand, medical images without annotations are abundant and highly accessible. To alleviate the influence of the limited number of clean labels, we propose a weakly supervised DL training approach that uses deformable image registration (DIR)-based annotations, leveraging the abundance of unlabeled data. We generate pseudo-contours by using DIR to propagate atlas contours onto abundant unlabeled images and then train a robust DL-based segmentation model. With 10 labeled cases from a TCIA dataset and 50 unlabeled CT scans from our institution, our model achieved Dice similarity coefficients of 87.9%, 73.4%, 73.4%, 63.2% and 61.0% on the mandible, left and right parotid glands, and left and right submandibular glands of the TCIA test set, along with competitive performance on our institutional clinical dataset and a third-party (PDDCA) dataset. Experimental results demonstrate that the proposed method outperforms both traditional multi-atlas DIR methods and fully supervised training on limited data, and that it is promising for DL-based medical image segmentation applications with limited annotated data.
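To make the pseudo-labeling step concrete, the sketch below shows one way DIR-based contour propagation could be implemented. It is a minimal illustration using SimpleITK (affine pre-alignment followed by demons deformable registration), not the paper's actual registration algorithm; the file names, parameter values, the propagate_atlas_labels helper, and the choice of demons DIR are all illustrative assumptions.

import SimpleITK as sitk

def propagate_atlas_labels(atlas_ct, atlas_labels, target_ct):
    """Deformably register an atlas CT to an unlabeled CT and warp the
    atlas contours onto the target grid, producing a pseudo-label map.
    (Hypothetical helper; demons is an assumed stand-in for the DIR.)"""
    # Step 1: affine pre-alignment driven by mutual information.
    init = sitk.CenteredTransformInitializer(
        target_ct, atlas_ct, sitk.AffineTransform(3),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
    reg.SetOptimizerScalesFromPhysicalShift()
    reg.SetInitialTransform(init, inPlace=True)
    reg.SetInterpolator(sitk.sitkLinear)
    affine = reg.Execute(target_ct, atlas_ct)

    # Step 2: resample the atlas CT onto the target grid, then estimate a
    # dense displacement field with demons deformable registration.
    moving = sitk.Resample(atlas_ct, target_ct, affine,
                           sitk.sitkLinear, -1000.0, sitk.sitkFloat32)
    demons = sitk.FastSymmetricForcesDemonsRegistrationFilter()
    demons.SetNumberOfIterations(100)   # illustrative values
    demons.SetStandardDeviations(1.5)   # Gaussian smoothing of the field
    field = demons.Execute(target_ct, moving)
    deformable = sitk.DisplacementFieldTransform(field)

    # Step 3: warp the contours with nearest-neighbor interpolation so the
    # discrete organ labels are preserved (affine first, then deformable).
    labels_affine = sitk.Resample(atlas_labels, target_ct, affine,
                                  sitk.sitkNearestNeighbor, 0, sitk.sitkUInt8)
    pseudo_labels = sitk.Resample(labels_affine, target_ct, deformable,
                                  sitk.sitkNearestNeighbor, 0, sitk.sitkUInt8)
    return pseudo_labels

# Example usage (file names are hypothetical):
atlas_ct = sitk.ReadImage("atlas_ct.nii.gz", sitk.sitkFloat32)
atlas_labels = sitk.ReadImage("atlas_contours.nii.gz", sitk.sitkUInt8)
target_ct = sitk.ReadImage("unlabeled_ct.nii.gz", sitk.sitkFloat32)
sitk.WriteImage(propagate_atlas_labels(atlas_ct, atlas_labels, target_ct),
                "pseudo_contours.nii.gz")

Running a step like this over each atlas/unlabeled-scan pair would yield pseudo-contour maps that can be pooled with the small set of clean labels to train the segmentation network, which is the weakly supervised setup the abstract describes.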

Keywords: deep learning; deformable image registration; limited labels; segmentation.

Publication types

  • Research Support, N.I.H., Extramural

MeSH terms

  • Deep Learning*
  • Image Processing, Computer-Assisted / methods
  • Neural Networks, Computer
  • Tomography, X-Ray Computed