Artificial intelligence: Deep learning in oncological radiomics and challenges of interpretability and data harmonization

Phys Med. 2021 Mar;83:108-121. doi: 10.1016/j.ejmp.2021.03.009. Epub 2021 Mar 22.

Abstract

Over the last decade, the field of Artificial Intelligence (AI) has evolved extensively. Modern radiation oncology relies on advanced computational methods aimed at personalized care and high diagnostic and therapeutic precision. The growing volume of available imaging data and advances in Machine Learning (ML), particularly Deep Learning (DL), have spurred research into uncovering "hidden" biomarkers and quantitative features from anatomical and functional medical images. Deep Neural Networks (DNNs) have achieved outstanding performance and are now widely used in image processing tasks. Recently, DNNs have been considered for radiomics, and their potential for explainable AI (XAI) may support classification and prediction in clinical practice. However, most such studies rely on limited datasets and lack generalizability. In this study we review the basics of radiomics feature extraction, DNNs in image analysis, and the major interpretability methods that enable explainable AI. Furthermore, we discuss the crucial need for multicenter recruitment of large datasets, which increases biomarker variability, in order to establish the potential clinical value of radiomics and to develop robust explainable AI models.
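
As an illustration of the radiomics feature extraction step mentioned above, the following is a minimal sketch using the open-source PyRadiomics package. The file paths and extraction settings (bin width, resampled voxel spacing) are hypothetical placeholders chosen for illustration, not values taken from the paper.

```python
# Minimal sketch: handcrafted radiomics feature extraction with PyRadiomics.
# Assumes a CT volume and a co-registered binary tumour mask in NIfTI format;
# both file names below are hypothetical.
import SimpleITK as sitk
from radiomics import featureextractor

image_path = "patient001_ct.nii.gz"        # hypothetical CT volume
mask_path = "patient001_gtv_mask.nii.gz"   # hypothetical tumour segmentation

# Extraction settings: discretization bin width and isotropic resampling are
# typical harmonization choices; the specific values here are illustrative.
settings = {
    "binWidth": 25,
    "resampledPixelSpacing": [1.0, 1.0, 1.0],
    "interpolator": sitk.sitkBSpline,
}
extractor = featureextractor.RadiomicsFeatureExtractor(**settings)
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("firstorder")  # intensity statistics
extractor.enableFeatureClassByName("shape")       # 3D shape descriptors
extractor.enableFeatureClassByName("glcm")        # texture (co-occurrence) features

# execute() returns an ordered dict of feature name -> value,
# plus "diagnostics" entries describing the extraction run.
features = extractor.execute(image_path, mask_path)
for name, value in features.items():
    if not name.startswith("diagnostics"):
        print(f"{name}: {value}")
```

In a typical radiomics pipeline, feature vectors of this kind would then be pooled across patients and centers and fed to an ML or DL classifier, which is where the multicenter data harmonization issues discussed in the review become critical.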

Keywords: Convolutional neural network; Data curation; Deep learning; Explainability; Interpretability; Machine learning; Radiomics.

Publication types

  • Review

MeSH terms

  • Artificial Intelligence*
  • Deep Learning*
  • Image Processing, Computer-Assisted
  • Machine Learning
  • Multicenter Studies as Topic
  • Neural Networks, Computer