How Well Do Self-Supervised Models Transfer to Medical Imaging?

J Imaging. 2022 Dec 1;8(12):320. doi: 10.3390/jimaging8120320.

Abstract

Self-supervised learning approaches have seen success transferring between similar medical imaging datasets; however, there has been no large-scale attempt to compare the transferability of self-supervised models against each other on medical images. In this study, we compare the generalisability of seven self-supervised models, two of which were trained in-domain, against supervised baselines across eight different medical datasets. We find that ImageNet-pretrained self-supervised models are more generalisable than their supervised counterparts, scoring up to 10% better on medical classification tasks. The two in-domain-pretrained models outperformed the other models by over 20% on in-domain tasks; however, they suffered a significant loss of accuracy on all other tasks. Our investigation of the feature representations suggests that this trend may be due to the models learning to focus too heavily on specific areas.
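As a rough illustration of the kind of transfer evaluation described above, the sketch below trains a linear probe on features from a frozen ImageNet-pretrained ResNet-50. The backbone choice, linear-evaluation protocol, dataset path, and hyperparameters are assumptions made for illustration only and are not the authors' published configuration.

```python
# Minimal linear-probe sketch for assessing transferability of a pretrained
# backbone on a downstream medical classification task. All specifics
# (ResNet-50, dataset folder, hyperparameters) are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen ImageNet-pretrained backbone; self-supervised weights (e.g. SimCLR,
# BYOL) could be loaded into the same architecture via load_state_dict.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = nn.Identity()           # expose the 2048-d feature vector
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval().to(device)

num_classes = 2                        # hypothetical binary medical task
probe = nn.Linear(2048, num_classes).to(device)
optimiser = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

tfm = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("medical_images/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

# Train only the linear probe; the backbone stays frozen throughout.
for epoch in range(10):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        with torch.no_grad():
            feats = backbone(images)   # frozen features
        loss = loss_fn(probe(feats), labels)
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
```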

Keywords: BYOL; MoCo; PIRL; SwAV; SimCLR; image classification; medical imaging; self-supervised learning.

Grants and funding

This research received funding from the Imperial College London Open Access Fund.