Segmentation only uses sparse annotations: Unified weakly and semi-supervised learning in medical images

Med Image Anal. 2022 Aug:80:102515. doi: 10.1016/j.media.2022.102515. Epub 2022 Jun 17.

Abstract

Since segmentation labeling is usually time-consuming and annotating medical images requires professional expertise, obtaining a large-scale, high-quality annotated segmentation dataset is laborious. We propose a novel weakly- and semi-supervised framework named SOUSA (Segmentation Only Uses Sparse Annotations), which learns from a small set of sparsely annotated data and a large amount of unlabeled data. The proposed framework contains a teacher model and a student model. The student model is weakly supervised by scribbles and a geodesic distance map derived from the scribbles. Meanwhile, a large amount of unlabeled data under various perturbations is fed to both the student and teacher models, and consistency between their output predictions is enforced by a Mean Square Error (MSE) loss and a carefully designed Multi-angle Projection Reconstruction (MPR) loss. Extensive experiments demonstrate the robustness and generalization ability of the proposed method. Results show that our method outperforms weakly- and semi-supervised state-of-the-art methods on multiple datasets. Furthermore, when the dataset size is limited, our method achieves performance competitive with some fully supervised methods trained on dense annotations.
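The teacher–student consistency described in the abstract can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: the function names, the exponential-moving-average (EMA) teacher update, and the coefficient `alpha` are assumptions based on the standard mean-teacher scheme; the MPR loss is omitted.

```python
import numpy as np

def mse_consistency(student_pred, teacher_pred):
    # Consistency term: mean squared error between the student's and
    # teacher's predictions on the same unlabeled image, each model
    # receiving a differently perturbed view of that image.
    return float(np.mean((student_pred - teacher_pred) ** 2))

def ema_update(teacher_w, student_w, alpha=0.99):
    # In the common mean-teacher scheme (assumed here), the teacher's
    # weights track the student's via an exponential moving average
    # rather than being trained by gradient descent.
    return alpha * teacher_w + (1.0 - alpha) * student_w

# Toy example with per-pixel foreground probabilities.
student = np.array([0.2, 0.8, 0.6])
teacher = np.array([0.1, 0.9, 0.5])
loss = mse_consistency(student, teacher)  # small when predictions agree
```

Only the unlabeled-data consistency term is shown; in the full framework this would be combined with the scribble and geodesic-distance supervision on the labeled subset.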

Keywords: Medical image; Semantic segmentation; Semi-supervised learning; Weakly supervised learning.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Deep Learning*
  • Humans
  • Image Processing, Computer-Assisted / methods
  • Supervised Machine Learning