Test-time bi-directional adaptation between image and model for robust segmentation

Comput Methods Programs Biomed. 2023 May:233:107477. doi: 10.1016/j.cmpb.2023.107477. Epub 2023 Mar 14.

Abstract

Background and objective: Deep learning models often suffer from performance degradation when deployed in real clinical environments because of appearance shifts between training and testing images. Most existing methods rely on training-time adaptation, which almost always requires target-domain samples during the training phase. Such solutions are constrained by the training process and cannot guarantee accurate predictions for test samples with unforeseen appearance shifts; moreover, collecting target samples in advance is often impractical. In this paper, we present a general method for making existing segmentation models robust to samples with unknown appearance shifts when they are deployed in daily clinical practice.

Methods: Our proposed test-time bi-directional adaptation framework combines two complementary strategies. First, our image-to-model (I2M) adaptation strategy adapts appearance-agnostic test images to the learned segmentation model using a novel plug-and-play statistical alignment style transfer module at test time. Second, our model-to-image (M2I) adaptation strategy adapts the learned segmentation model to test images with unknown appearance shifts. This strategy uses an augmented self-supervised learning module to fine-tune the learned model with proxy labels that the model itself generates, and the fine-tuning is adaptively constrained by our novel proxy consistency criterion. Together, the complementary I2M and M2I strategies achieve robust segmentation against unknown appearance shifts with existing deep learning models.
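The abstract does not give implementation details, but the two strategies can be pictured with a short sketch. The PyTorch code below is a minimal illustration, not the authors' implementation: align_statistics stands in for the I2M statistical alignment style transfer (assuming stored source-domain channel-wise feature statistics), and m2i_finetune stands in for the M2I proxy-label fine-tuning (assuming confidence-thresholded pseudo-labels in place of the paper's augmented self-supervision and proxy consistency criterion). All function names and hyperparameters are hypothetical.

```python
import copy

import torch
import torch.nn.functional as F


def align_statistics(test_feat, src_mean, src_std, eps=1e-5):
    # I2M sketch: re-normalise test feature maps so their channel-wise
    # statistics match those recorded from the source (training) domain.
    mean = test_feat.mean(dim=(2, 3), keepdim=True)
    std = test_feat.std(dim=(2, 3), keepdim=True) + eps
    return (test_feat - mean) / std * src_std + src_mean


def m2i_finetune(model, image, n_steps=5, lr=1e-4, conf_thresh=0.9):
    # M2I sketch: fine-tune a copy of the segmentation model on a single
    # test image, using its own confident predictions as proxy labels.
    adapted = copy.deepcopy(model)
    optimizer = torch.optim.Adam(adapted.parameters(), lr=lr)
    for _ in range(n_steps):
        logits = adapted(image)                      # (N, C, H, W)
        probs = torch.softmax(logits, dim=1)
        conf, proxy = probs.max(dim=1)               # proxy labels from the model itself
        mask = conf > conf_thresh                    # keep only confident pixels
        if mask.sum() == 0:                          # nothing confident enough to learn from
            break
        loss = F.cross_entropy(logits.permute(0, 2, 3, 1)[mask], proxy[mask])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return adapted
```

In this simplified view, I2M changes only the test image's feature statistics while the model stays fixed, whereas M2I updates a per-image copy of the model and leaves the original weights untouched, so the two directions can be applied independently or together at test time.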

Results: Extensive experiments on 10 datasets containing fetal ultrasound, chest X-ray, and retinal fundus images demonstrate that our proposed method achieves promising robustness and efficiency in segmenting images with unknown appearance shifts.

Conclusions: To address the appearance shift problem in clinically acquired medical images, we provide robust segmentation through two complementary strategies. Our solution is general and amenable to deployment in clinical settings.

Keywords: Medical image segmentation; Self-supervised learning; Style transfer; Test-time adaptation.

MeSH terms

  • Female
  • Fundus Oculi
  • Humans
  • Image Processing, Computer-Assisted*
  • Pregnancy
  • Ultrasonography, Prenatal*