Evaluation of Explainable Deep Learning Methods for Ophthalmic Diagnosis

Clin Ophthalmol. 2021 Jun 18;15:2573-2581. doi: 10.2147/OPTH.S312236. eCollection 2021.

Abstract

Background: The lack of explanations for the decisions made by deep learning algorithms has hampered their acceptance by the clinical community, despite highly accurate results on multiple problems. Attribution methods, which explain a model's decision by highlighting the input regions that influenced it, have been tested on medical imaging problems. However, the performance of different attribution methods has so far been compared only for models trained on standard machine learning datasets, not on medical images. In this study, we performed a comparative analysis to determine the method providing the best explanations for retinal optical coherence tomography (OCT) diagnosis.

Methods: A well-known deep learning model, Inception-v3, was trained to diagnose three retinal diseases: choroidal neovascularization (CNV), diabetic macular edema (DME), and drusen. The explanations from 13 different attribution methods were rated for clinical significance by a panel of 14 clinicians, who also provided feedback on the current and future scope of such methods.
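The article does not include code, but as a rough illustration of the training setup described above, a fine-tuning sketch along the following lines could be used. Everything here is an assumption: the framework (PyTorch/torchvision), the dataset path `oct_train/`, the inclusion of a normal class alongside the three diseases, and all hyperparameters.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 4  # CNV, DME, drusen + a normal class (the normal class is an assumption)

# Inception-v3 expects 299x299 RGB inputs; grayscale OCT scans are
# replicated across 3 channels to match the ImageNet-pretrained stem.
tfm = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((299, 299)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("oct_train/", transform=tfm)  # hypothetical layout: one folder per class
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

# Replace both classification heads (main and auxiliary) for the OCT classes.
model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, NUM_CLASSES)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:
    opt.zero_grad()
    logits, aux_logits = model(images)  # Inception-v3 returns two heads in train mode
    loss = loss_fn(logits, labels) + 0.4 * loss_fn(aux_logits, labels)
    loss.backward()
    opt.step()
```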

Results: An attribution method based on Taylor series expansion, called Deep Taylor, was rated highest by the clinicians, with a median rating of 3.85/5. It was followed by Guided Backpropagation (GBP) and SHapley Additive exPlanations (SHAP).
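As a companion sketch for the attribution step, the snippet below computes SHAP-style attributions with Captum's GradientShap on the model from the previous sketch. This is again an assumption-laden illustration: the paper does not name its attribution toolkit; Captum has no Deep Taylor analyzer (Deep Taylor and Guided Backpropagation are available in, for example, the Keras-based iNNvestigate library), and Captum's own GuidedBackprop requires module-based ReLUs, which torchvision's Inception-v3 does not use. The image file name is hypothetical.

```python
# Reuses `model` and `tfm` from the training sketch above.
import torch
from PIL import Image
from captum.attr import GradientShap

model.eval()
image = tfm(Image.open("dme_example.png").convert("RGB")).unsqueeze(0)  # hypothetical file
target = model(image).argmax(dim=1).item()  # explain the model's predicted class

# Gradient-based SHAP approximation: expected gradients over interpolations
# between the input and a baseline (here, an all-zeros tensor in the
# normalized input space -- a common, if arbitrary, choice).
shap_map = GradientShap(model).attribute(
    image, baselines=torch.zeros_like(image), target=target, n_samples=20)

# Collapse channels to a 2-D relevance heatmap for overlay on the OCT scan.
heatmap = shap_map.abs().sum(dim=1).squeeze().detach().numpy()
```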

Conclusion: Explanations from the top-rated methods highlighted the characteristic structures of each disease: fluid accumulation for CNV, the boundaries of edema for DME, and bumpy areas of the retinal pigment epithelium (RPE) for drusen. The most suitable method for a specific medical diagnosis task may differ from the one considered best for conventional tasks. Overall, there was a high degree of acceptance among the clinicians surveyed in the study.

Keywords: choroidal neovascularization; deep learning; diabetic macular edema; drusen; explainable AI; image processing; machine learning; optical coherence tomography; retina.