Interpretable Differential Diagnosis of Non-COVID Viral Pneumonia, Lung Opacity and COVID-19 Using Tuned Transfer Learning and Explainable AI

Healthcare (Basel). 2023 Jan 31;11(3):410. doi: 10.3390/healthcare11030410.

Abstract

The coronavirus pandemic has spread to virtually every country on the globe, inflicting enormous health, financial, and emotional devastation and, in some countries, the collapse of healthcare systems. An automated system that can rapidly detect COVID-19 infection would be highly beneficial to healthcare services and to people around the world. Molecular or antigen testing, along with radiological X-ray imaging, is currently used in clinics to diagnose COVID-19. Nonetheless, given the surge in coronavirus cases and the overwhelming workload of hospital physicians, developing a highly accurate AI-based automatic COVID-19 detection system has become imperative. On X-ray images, distinguishing COVID-19 from non-COVID viral pneumonia and other lung opacities can be challenging. This research applied artificial intelligence (AI) to deliver highly accurate, automated detection of COVID-19 from normal chest X-ray images. The study was then extended to differentiate COVID-19 from normal, lung opacity, and non-COVID viral pneumonia images. We employed three pre-trained models, Xception, VGG19, and ResNet50, on a benchmark dataset of 21,165 X-ray images. We first formulated COVID-19 detection as a binary classification problem, separating COVID-19 from normal X-ray images, and achieved 97.5%, 97.5%, and 93.3% accuracy for Xception, VGG19, and ResNet50, respectively. We then focused on developing an efficient model for multi-class classification and achieved accuracies of 75% for ResNet50, 92% for VGG19, and 93% for Xception. Although Xception's and VGG19's performances were nearly identical, Xception proved more effective, with higher precision, recall, and F1 scores. Finally, we applied explainable AI to each of the models, adding interpretability to the study. A comprehensive comparison of the models' explanations revealed that Xception is more precise in indicating the actual features responsible for its predictions. This addition of explainable AI will greatly benefit medical professionals, who can visualize how a model arrives at its predictions rather than having to trust the machine-learning models blindly.
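
As a rough illustration of the transfer-learning setup the abstract describes, a minimal TensorFlow/Keras sketch of the binary COVID-19 vs. normal task is given below. The abstract does not report hyperparameters, so the image size, optimizer, learning rate, dropout rate, epoch count, and the `xray_data/` directory layout are all illustrative assumptions, not the authors' exact settings.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (299, 299)  # Xception's native input resolution (assumed here)

# Xception pre-trained on ImageNet, without its 1000-class head.
base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,)
)
base.trainable = False  # freeze the convolutional features for initial training

# New head: a single sigmoid unit for COVID-19 vs. normal.
x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = models.Model(base.input, outputs)

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

# Hypothetical folder layout: xray_data/covid/... and xray_data/normal/...
train_ds = tf.keras.utils.image_dataset_from_directory(
    "xray_data", image_size=IMG_SIZE, batch_size=32, label_mode="binary"
)
# Xception expects pixel values scaled to [-1, 1].
train_ds = train_ds.map(
    lambda imgs, labels: (tf.keras.applications.xception.preprocess_input(imgs), labels)
)
model.fit(train_ds, epochs=5)
```

For the four-class task (COVID-19, normal, lung opacity, non-COVID viral pneumonia), the head would change to `layers.Dense(4, activation="softmax")` with a categorical cross-entropy loss; the same pattern applies to VGG19 and ResNet50 via their respective `tf.keras.applications` constructors and `preprocess_input` functions.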
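The abstract does not name the specific explainable-AI technique used. As one common way to visualize which image regions drive a CNN's prediction, a minimal Grad-CAM sketch is shown below, purely as an illustration rather than the authors' method. The layer name `block14_sepconv2_act` is the final convolutional activation in Keras' Xception; the lookup works here because the head in the previous sketch was built directly on `base.output`, keeping Xception's layers addressable from `model`.

```python
import tensorflow as tf

def grad_cam(model, image, conv_layer_name="block14_sepconv2_act"):
    """Return a [0, 1] heatmap of regions that influenced the prediction.

    image: float32 tensor of shape (1, H, W, 3), already preprocessed
    for the model (e.g., via xception.preprocess_input).
    """
    # Map the input to both the chosen conv layer's activations and the output.
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image)
        score = preds[:, 0]  # score of the positive class (binary head)
    grads = tape.gradient(score, conv_out)        # sensitivity of the score to each feature map
    weights = tf.reduce_mean(grads, axis=(1, 2))  # global-average-pool the gradients per channel
    cam = tf.einsum("bhwc,bc->bhw", conv_out, weights)
    cam = tf.nn.relu(cam)[0]                      # keep only positively contributing regions
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

# Usage: heatmap = grad_cam(model, preprocessed_xray)
# For a 299x299 input, the heatmap is 10x10 and is typically upsampled
# and overlaid on the X-ray for visual inspection.
```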

Keywords: COVID-19; X-ray imaging; explainable AI; lung opacity; non-COVID viral pneumonia; transfer learning.

Grants and funding

The authors are grateful to King Saud University, Riyadh, Saudi Arabia, for funding this work through Researchers Supporting Project Number (RSP2023R18).