Analyzing Transfer Learning of Vision Transformers for Interpreting Chest Radiography

J Digit Imaging. 2022 Dec;35(6):1445-1462. doi: 10.1007/s10278-022-00666-z. Epub 2022 Jul 11.

Abstract

The limited availability of medical imaging datasets is a critical limitation when using "data-hungry" deep learning to gain performance improvements. To deal with this issue, transfer learning has become the de facto standard, where a convolutional neural network (CNN) pre-trained on natural images (e.g., ImageNet) is fine-tuned on medical images. Meanwhile, pre-trained transformers, which are self-attention-based models, have become the de facto standard in natural language processing (NLP) and the state of the art in image classification due to their powerful transfer learning abilities. Inspired by the success of transformers in NLP and image classification, large-scale transformers (such as the vision transformer) have been trained on natural images. Based on these recent developments, this research aims to explore the efficacy of transformers pre-trained on natural images for medical images. Specifically, we analyze a pre-trained vision transformer on the CheXpert and pediatric pneumonia datasets. We use standard CNN models, including VGGNet and ResNet, as baselines. By examining the acquired representations and results, we find that transfer learning from the pre-trained vision transformer yields improved results compared to the pre-trained CNNs, which demonstrates the greater transfer ability of transformers in medical imaging.
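The transfer learning pipeline the abstract describes (load ImageNet weights, swap the classification head, fine-tune on chest X-rays) can be sketched concisely. The following is a minimal, illustrative sketch only: the abstract does not specify the framework, ViT variant, baseline depths, or hyperparameters, so the choice of PyTorch with the timm library, the vit_base_patch16_224 / vgg16 / resnet50 model names, the binary head, and the learning rate are all assumptions.

# Minimal sketch of the fine-tuning setup (assumptions: PyTorch + timm,
# ViT-Base/16 at 224x224, binary pneumonia-vs-normal head, AdamW at 1e-4).
import timm
import torch
import torch.nn as nn

# Load a vision transformer pre-trained on natural images (ImageNet),
# replacing its classification head for the target medical task.
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=2)

# CNN baselines built the same way, mirroring the abstract's comparison.
vgg_baseline = timm.create_model("vgg16", pretrained=True, num_classes=2)
resnet_baseline = timm.create_model("resnet50", pretrained=True, num_classes=2)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def finetune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on a batch of 224x224 RGB chest X-ray tensors."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

Under this setup, all pre-trained weights are updated during fine-tuning (full fine-tuning rather than a frozen-backbone linear probe); either regime is compatible with the abstract's description, and the choice here is an assumption.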

Keywords: Chest X-rays; Classification; Transfer learning; Vision transformer.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Child
  • Humans
  • Machine Learning*
  • Neural Networks, Computer*
  • Radiography