Implementation of model explainability for a basic brain tumor detection using convolutional neural networks on MRI slices

Neuroradiology. 2020 Nov;62(11):1515-1518. doi: 10.1007/s00234-020-02465-1. Epub 2020 Jun 4.

Abstract

Purpose: While neural networks are gaining popularity in medical research, attempts to make a model's decisions explainable are often made only towards the end of the development process, once a high predictive accuracy has been achieved.

Methods: In order to assess the advantages of implementing features to increase explainability early in the development process, we trained a neural network to differentiate between MRI slices containing either a vestibular schwannoma, a glioblastoma, or no tumor.
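
The abstract does not specify the network architecture, so the following is only a minimal sketch of a three-class CNN slice classifier of the kind described; the layer sizes, the single-channel 128x128 input, and the PyTorch framework are all assumptions, not details from the paper.

```python
# Minimal illustrative sketch (not the authors' architecture): a small CNN that
# classifies single grayscale MRI slices into three classes -- vestibular
# schwannoma, glioblastoma, or no tumor. Input size 128x128 is assumed.
import torch
import torch.nn as nn

class SliceClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = SliceClassifier()
dummy_slice = torch.randn(1, 1, 128, 128)  # one grayscale 128x128 MRI slice
logits = model(dummy_slice)                # shape: (1, 3)
print(logits.softmax(dim=1))               # class probabilities
```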

Results: Making the decisions of a network more explainable helped to identify potential bias and choose appropriate training data.
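
One generic way to make such a classifier's decisions more inspectable is a gradient-based saliency map, which highlights the pixels that most influence the predicted class; the paper does not state that this particular method was used, so the sketch below is a hedged illustration only, reusing the hypothetical `model` and `dummy_slice` from the example above.

```python
# Hedged sketch of a gradient-based saliency map, a common explainability
# technique; not necessarily the method used in the paper.
import torch

def saliency_map(model: torch.nn.Module, image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Return |d score_target / d pixel| for each pixel of a single slice."""
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]  # scalar logit for the chosen class
    score.backward()                       # gradients flow back to the input
    return image.grad.abs().squeeze()      # high values = influential pixels

heatmap = saliency_map(model, dummy_slice, target_class=0)
print(heatmap.shape)  # (128, 128): overlay on the slice to check whether the
                      # network attends to the tumor or to confounding regions
```

Overlaying such a heatmap on the input slice makes it possible to spot when a network relies on confounds (e.g., image borders or annotations) rather than the lesion itself, which is the kind of bias the authors report identifying.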

Conclusion: Model explainability should be considered in the early stages of training a neural network for medical purposes, as it may save time in the long run and will ultimately help physicians integrate the network's predictions into a clinical decision.

Keywords: Artificial intelligence; Deep learning; Explainability; Glioblastoma; Machine learning; Vestibular schwannoma.

MeSH terms

  • Bayes Theorem
  • Brain Neoplasms / diagnostic imaging*
  • Contrast Media
  • Datasets as Topic
  • Diagnosis, Differential
  • Glioblastoma / diagnostic imaging
  • Humans
  • Image Interpretation, Computer-Assisted / methods*
  • Magnetic Resonance Imaging / methods*
  • Neural Networks, Computer*
  • Neurilemmoma / diagnostic imaging

Substances

  • Contrast Media