Calibrating ensembles for scalable uncertainty quantification in deep learning-based medical image segmentation

Comput Biol Med. 2023 Sep:163:107096. doi: 10.1016/j.compbiomed.2023.107096. Epub 2023 Jun 1.

Abstract

Uncertainty quantification in automated image analysis is highly desired in many applications. Typically, machine learning models for classification or segmentation are developed to provide only binary answers; however, quantifying the uncertainty of the models can play a critical role, for example, in active learning or human-machine interaction. Uncertainty quantification is especially difficult when using deep learning-based models, which are the state-of-the-art in many imaging applications. Current uncertainty quantification approaches do not scale well to high-dimensional real-world problems. Scalable solutions often rely on classical techniques, such as applying dropout during inference or training ensembles of identical models with different random seeds, to obtain a posterior distribution. In this paper, we present the following contributions. First, we show that the classical approaches fail to approximate the classification probability. Second, we propose a scalable and intuitive framework for uncertainty quantification in medical image segmentation that yields measurements approximating the classification probability. Third, we suggest using k-fold cross-validation to overcome the need for held-out calibration data. Lastly, we motivate the adoption of our method in active learning, in creating pseudo-labels to learn from unlabeled images, and in human-machine collaboration.
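To make the ensemble baseline mentioned above concrete, the following is a minimal sketch of how per-pixel uncertainty is commonly derived from an ensemble of segmentation models: average the members' class probabilities and take the predictive entropy. This is a generic illustration, not the paper's code; the function and variable names are hypothetical, and the logits are simulated rather than produced by trained networks.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax along the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ensemble_uncertainty(logits_per_model):
    """Average per-member class probabilities and return the predictive
    entropy per pixel as an uncertainty map.

    logits_per_model: array of shape (K, H, W, C) for K ensemble members.
    """
    probs = softmax(np.asarray(logits_per_model))   # (K, H, W, C)
    mean_probs = probs.mean(axis=0)                 # (H, W, C)
    # Predictive entropy: high where members disagree or are unconfident.
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=-1)
    return mean_probs, entropy

# Toy example: 3 "models", a 4x4 image, 2 classes (background/foreground).
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 4, 4, 2))
mean_probs, entropy = ensemble_uncertainty(logits)
```

Note that such raw entropy maps are exactly what the abstract argues may be miscalibrated, i.e., they need not approximate the true classification probability without an additional calibration step.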

Keywords: Deep learning; Segmentation; Uncertainty quantification.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Calibration
  • Deep Learning*
  • Humans
  • Image Processing, Computer-Assisted
  • Probability
  • Uncertainty