Self-supervised 3D anatomy segmentation using self-distilled masked image transformer (SMIT)

Med Image Comput Comput Assist Interv. 2022 Sep;13434:556-566. doi: 10.1007/978-3-031-16440-8_53. Epub 2022 Sep 16.

Abstract

Vision transformers efficiently model long-range context and have thus demonstrated impressive accuracy gains in several image analysis tasks, including segmentation. However, such methods need large labeled datasets for training, which are hard to obtain for medical image analysis. Self-supervised learning (SSL) has demonstrated success in medical image segmentation using convolutional networks. In this work, we developed a self-distillation learning method with masked image modeling to perform SSL for vision transformers (SMIT), applied to 3D multi-organ segmentation from CT and MRI. Our contribution combines a dense pixel-wise regression pretext task performed within masked patches, called masked image prediction, with masked patch token distillation to pre-train vision transformers. Our approach is more accurate and requires less fine-tuning data than other pretext tasks. Unlike prior methods, which typically used image sets arising from the disease sites and imaging modalities of the target tasks, we used 3,643 CT scans (602,708 images) arising from head and neck, lung, and kidney cancers as well as COVID-19 for pre-training, and applied the pre-trained model to abdominal organ segmentation from MRI of pancreatic cancer patients as well as segmentation of 13 different abdominal organs from publicly available CT. Our method showed clear accuracy improvements (average DSC of 0.875 on MRI and 0.878 on CT) with a reduced requirement for fine-tuning data compared with commonly used pretext tasks. Extensive comparisons against multiple current SSL methods were performed. Our code is available at: https://github.com/harveerar/SMIT.git.
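To make the two pretext tasks concrete, the following is a minimal, self-contained PyTorch sketch of masked image prediction (pixel regression within masked patches) and masked patch token distillation. It is not the authors' released code (see the repository above for that): it substitutes a toy 2D single-channel vision transformer for the paper's 3D network, and names and values such as TinyViT, proto, mask_ratio, tau_t, and tau_s are illustrative assumptions rather than the paper's settings.

    # Illustrative sketch of SMIT-style pretext losses (not the authors' code).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyViT(nn.Module):
        """Toy 2D patch-token transformer standing in for the paper's 3D model."""
        def __init__(self, img=32, patch=4, dim=64, depth=2, heads=4, proto=128):
            super().__init__()
            self.patch = patch
            self.n = (img // patch) ** 2                          # tokens per image
            self.embed = nn.Conv2d(1, dim, patch, patch)          # patch embedding
            self.pos = nn.Parameter(torch.zeros(1, self.n, dim))  # positional enc.
            self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
            layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, depth)
            self.pix_head = nn.Linear(dim, patch * patch)  # masked image prediction
            self.tok_head = nn.Linear(dim, proto)          # token distillation head

        def forward(self, x, mask=None):
            t = self.embed(x).flatten(2).transpose(1, 2)   # (B, N, dim)
            if mask is not None:                           # replace masked tokens
                t = torch.where(mask[..., None], self.mask_token.expand_as(t), t)
            t = self.encoder(t + self.pos)
            return self.pix_head(t), self.tok_head(t)

    def smit_losses(student, teacher, imgs, mask_ratio=0.6, tau_t=0.04, tau_s=0.1):
        """Compute both pretext losses on randomly masked patches."""
        B, N = imgs.size(0), student.n
        mask = torch.rand(B, N, device=imgs.device) < mask_ratio  # (B, N) bool

        pix_pred, tok_s = student(imgs, mask)              # student sees masks
        with torch.no_grad():                              # teacher sees full image
            _, tok_t = teacher(imgs)

        # Masked image prediction: L1 regression of masked patch pixels.
        target = F.unfold(imgs, student.patch, stride=student.patch).transpose(1, 2)
        l_mip = (pix_pred - target).abs()[mask].mean()

        # Masked patch token distillation: cross-entropy between sharpened
        # teacher and student token distributions, on masked tokens only.
        p_t = F.softmax(tok_t / tau_t, dim=-1)
        log_p_s = F.log_softmax(tok_s / tau_s, dim=-1)
        l_mpd = -(p_t * log_p_s).sum(-1)[mask].mean()
        return l_mip, l_mpd

    # Toy usage: teacher as a frozen copy of the student (DINO-style assumption).
    student, teacher = TinyViT(), TinyViT()
    teacher.load_state_dict(student.state_dict())
    for p in teacher.parameters():
        p.requires_grad_(False)
    imgs = torch.rand(2, 1, 32, 32)
    l_mip, l_mpd = smit_losses(student, teacher, imgs)
    loss = l_mip + l_mpd  # the paper's actual loss weighting may differ

In self-distillation setups of this kind, the teacher is usually updated as an exponential moving average of the student rather than kept frozen, and the two losses are combined with tuned weights; the paper's exact backbone, masking strategy, and loss weighting may differ from this sketch.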

Keywords: Self-supervised learning; masked embedding transformer; masked image modeling; segmentation; self-distillation.