COVID-19 Automatic Diagnosis With Radiographic Imaging: Explainable Attention Transfer Deep Neural Networks

IEEE J Biomed Health Inform. 2021 Jul;25(7):2376-2387. doi: 10.1109/JBHI.2021.3074893. Epub 2021 Jul 27.

Abstract

Researchers have sought help from deep learning methods to alleviate the enormous burden of reading radiological images borne by clinicians during the COVID-19 pandemic. However, clinicians are often reluctant to trust deep models because of their black-box characteristics. To automatically differentiate COVID-19 and community-acquired pneumonia from healthy lungs in radiographic imaging, we propose an explainable attention-transfer classification model based on a knowledge distillation network structure. Attention is always transferred from the teacher network to the student network. First, the teacher network extracts global features and concentrates on the infection regions to generate attention maps; a deformable attention module strengthens the response of the infection regions and suppresses noise in irrelevant regions with an expanded receptive field. Second, an image fusion module combines the attention knowledge transferred from the teacher network to the student network with the essential information in the original input. While the teacher network focuses on global features, the student branch focuses on irregularly shaped lesion regions to learn discriminative features. Finally, we conduct extensive experiments on public chest X-ray and CT datasets to demonstrate the explainability of the proposed architecture in diagnosing COVID-19.
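To make the teacher-to-student attention transfer concrete, the sketch below shows a generic activation-based attention-transfer loss in PyTorch. It is a minimal illustration only, not the authors' implementation: the deformable attention module and the image fusion module described in the abstract are omitted, and the TinyCNN backbone, layer widths, and loss weight are hypothetical placeholders.

```python
# Minimal sketch of teacher-to-student attention transfer (assumptions: generic
# activation-based attention maps, toy backbones; NOT the paper's architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F


def attention_map(feature: torch.Tensor) -> torch.Tensor:
    """Collapse a feature map (N, C, H, W) into a spatial attention map (N, H*W),
    L2-normalised so teacher and student maps are comparable."""
    att = feature.pow(2).mean(dim=1)   # (N, H, W): channel-wise activation energy
    att = att.flatten(1)               # (N, H*W)
    return F.normalize(att, p=2, dim=1)


def attention_transfer_loss(student_feat, teacher_feat):
    """Mean squared distance between normalised attention maps; the teacher map is detached."""
    return (attention_map(student_feat) - attention_map(teacher_feat).detach()).pow(2).mean()


class TinyCNN(nn.Module):
    """Toy backbone standing in for the teacher/student networks
    (3 classes: COVID-19, community-acquired pneumonia, healthy)."""
    def __init__(self, width: int = 32, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(width, num_classes))

    def forward(self, x):
        feat = self.features(x)
        return self.head(feat), feat


if __name__ == "__main__":
    teacher, student = TinyCNN(width=64), TinyCNN(width=32)
    teacher.eval()                      # in practice the teacher is pre-trained and frozen

    x = torch.randn(4, 1, 64, 64)       # placeholder single-channel chest images
    y = torch.randint(0, 3, (4,))       # placeholder class labels

    with torch.no_grad():
        _, t_feat = teacher(x)
    logits, s_feat = student(x)

    # Classification loss plus attention-transfer term (weight 1e3 is illustrative).
    loss = F.cross_entropy(logits, y) + 1e3 * attention_transfer_loss(s_feat, t_feat)
    loss.backward()
    print(f"total loss: {loss.item():.4f}")
```

Because the attention maps are averaged over channels and normalised, the teacher and student may have different channel widths, as in the example; only their spatial resolutions must match at the chosen layer.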

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms
  • COVID-19 / diagnostic imaging*
  • Deep Learning*
  • Humans
  • Lung / diagnostic imaging
  • Radiographic Image Interpretation, Computer-Assisted / methods*
  • SARS-CoV-2
  • Tomography, X-Ray Computed / methods*

Grants and funding

The work was supported by The Wallace H. Coulter Distinguished Faculty Fellow, Amazon Faculty Research Fellow, Microsoft Azure Cloud Grant, and Petit Institute Faculty Fellow awards to Professor Wang. The content of this article is solely the responsibility of the authors.