Weakly Supervised Biomedical Image Segmentation by Reiterative Learning

IEEE J Biomed Health Inform. 2019 May;23(3):1205-1214. doi: 10.1109/JBHI.2018.2850040. Epub 2018 Jun 25.

Abstract

Recent advances in deep learning have produced encouraging results for biomedical image segmentation; however, these results rely heavily on comprehensive annotation. In this paper, we propose a neural network architecture and a new algorithm, known as overlapped region forecast, for the automatic segmentation of gastric cancer images. To the best of our knowledge, this is the first report of deep learning applied to the segmentation of gastric cancer images. Moreover, we present a reiterative learning framework that trains a simple network on weakly annotated biomedical images and achieves superior performance without pretraining or additional manual annotation. We customize the loss function so that the model converges faster while avoiding local minima. Our overlapped region forecast algorithm eliminates patch boundary errors. By studying the characteristics of models trained with two different patch extraction methods, we train iteratively and integrate predictions with the weak annotations to improve the quality of the training data. Using these methods, we achieve a mean Intersection over Union of 0.883 and a mean accuracy of 91.09% on the partially labeled dataset, securing a win in the 2017 China Big Data and Artificial Intelligence Innovation and Entrepreneurship Competition.
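
The abstract does not give implementation details of the overlapped region forecast step. The sketch below illustrates the general idea of overlapped sliding-window inference, in which per-pixel predictions from overlapping patches are accumulated and averaged so that pixels near patch borders receive contributions from several patches, suppressing boundary artifacts. All names here (predict_patch, patch_size, stride) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def overlapped_inference(image, predict_patch, patch_size=256, stride=128):
    """Sliding-window segmentation with overlapping patches.

    Illustrative sketch only: the paper's exact 'overlapped region forecast'
    algorithm is not specified in the abstract. Per-pixel probabilities from
    overlapping windows are averaged, which reduces artifacts at patch borders.
    `predict_patch(patch)` is an assumed model hook returning a probability map
    with the same spatial shape as its input patch.
    """
    h, w = image.shape[:2]
    prob_sum = np.zeros((h, w), dtype=np.float64)
    count = np.zeros((h, w), dtype=np.float64)

    for y in range(0, max(h - patch_size, 0) + 1, stride):
        for x in range(0, max(w - patch_size, 0) + 1, stride):
            patch = image[y:y + patch_size, x:x + patch_size]
            prob = predict_patch(patch)
            prob_sum[y:y + patch_size, x:x + patch_size] += prob
            count[y:y + patch_size, x:x + patch_size] += 1.0

    # Border strips not covered by any window are left at zero; a full-coverage
    # tiling (e.g., a final window flush with each edge) is omitted for brevity.
    count[count == 0] = 1.0
    return prob_sum / count
```

In practice, the averaged map would be thresholded (or argmax-ed in the multi-class case) to obtain the final mask, against which metrics such as mean Intersection over Union and mean accuracy are computed.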

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms
  • Histological Techniques
  • Humans
  • Image Interpretation, Computer-Assisted / methods*
  • Neural Networks, Computer
  • Stomach Neoplasms / diagnostic imaging
  • Supervised Machine Learning*