Towards Building a Trustworthy Deep Learning Framework for Medical Image Analysis

Sensors (Basel). 2023 Sep 27;23(19):8122. doi: 10.3390/s23198122.

Abstract

Computer vision and deep learning have the potential to improve medical artificial intelligence (AI) by assisting in diagnosis, prediction, and prognosis. However, the application of deep learning to medical image analysis is challenging due to limited data availability and imbalanced data. While model performance is undoubtedly essential for medical image analysis, model trust is equally important. To address these challenges, we propose TRUDLMIA, a trustworthy deep learning framework for medical image analysis, which leverages image features learned through self-supervised learning and utilizes a novel surrogate loss function to build trustworthy models with optimal performance. The framework is validated on three benchmark data sets for detecting pneumonia, COVID-19, and melanoma, and the created models prove to be highly competitive, even outperforming those designed specifically for the tasks. Furthermore, we conduct ablation studies, cross-validation, and result visualization and demonstrate the contribution of the proposed modules to both model performance (up to 21%) and model trust (up to 5%). We expect that the proposed framework will support researchers and clinicians in advancing the use of deep learning for dealing with public health crises, improving patient outcomes, increasing diagnostic accuracy, and enhancing the overall quality of healthcare delivery.
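The abstract does not specify the paper's surrogate loss, but its keywords name contrastive self-supervised learning as the feature-learning component. As a generic illustration only (not the authors' method), the sketch below implements the standard NT-Xent contrastive objective used in SimCLR-style pretraining, where two augmented views of each image are pulled together in embedding space; all function and variable names here are hypothetical.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Generic NT-Xent (normalized temperature-scaled cross-entropy)
    contrastive loss, illustrative of self-supervised feature learning.
    z1, z2: embeddings of two augmented views of the same batch, shape (N, D).
    """
    z = np.concatenate([z1, z2], axis=0)                  # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)      # L2-normalize rows
    sim = z @ z.T / temperature                           # scaled cosine similarities
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                        # exclude self-similarity
    # view i in z1 is the positive of view i in z2, and vice versa
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

When the two views are embeddings of the same underlying images, positive pairs have maximal cosine similarity and the loss is low; mismatched views yield a higher loss, which is the signal that drives representation learning during pretraining.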

Keywords: AUC; COVID-19; computer-aided diagnosis; contrastive learning; feature learning; self-supervised learning; trustworthiness.

MeSH terms

  • Artificial Intelligence
  • Benchmarking
  • COVID-19* / diagnosis
  • Deep Learning*
  • Humans
  • Melanoma*