ISeeU: Visually interpretable deep learning for mortality prediction inside the ICU

J Biomed Inform. 2019 Oct;98:103269. doi: 10.1016/j.jbi.2019.103269. Epub 2019 Aug 17.

Abstract

To improve the performance of Intensive Care Units (ICUs), the field of biostatistics has developed scores that estimate the likelihood of negative outcomes. These scores help evaluate the effectiveness of treatments and clinical practice, and also help to identify patients with unexpected outcomes. However, several studies have shown that they offer suboptimal predictive performance. Deep Learning, by contrast, offers state-of-the-art capabilities in certain prediction tasks, and research suggests deep neural networks are able to outperform traditional techniques. Nevertheless, a major impediment to the adoption of Deep Learning in healthcare is its limited interpretability: in this field it is crucial to understand why a model makes a prediction, to ensure that models are actually learning relevant features rather than spurious correlations. To address this, we propose a deep multi-scale convolutional architecture trained on the Medical Information Mart for Intensive Care III (MIMIC-III) for mortality prediction, together with the use of concepts from coalitional game theory (Shapley values) to construct visual explanations that show how important the network deems each of its inputs. Results show our model attains a ROC AUC of 0.8735 (± 0.0025), competitive with state-of-the-art Deep Learning mortality models trained on MIMIC-III data, while remaining interpretable. Supporting code can be found at https://github.com/williamcaicedo/ISeeU.
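The abstract names two ingredients without implementation detail: a multi-scale convolutional network over ICU time series and Shapley-value attributions over its inputs. The sketch below is only a rough illustration of that combination, not the authors' architecture or code (which lives in the ISeeU repository linked above). It builds a toy Keras model with parallel Conv1D branches of different kernel sizes and estimates per-feature Shapley values with a simple Monte Carlo sampling of feature coalitions; the input shape (22 variables over 48 hours), layer sizes, zero baseline, and sampling routine are all assumptions made for demonstration.

```python
# Illustrative sketch only: a toy multi-scale 1-D CNN over ICU time series
# plus a Monte Carlo estimate of per-feature Shapley values.
# Shapes, layer sizes, baseline, and the sampling scheme are assumptions;
# the authors' actual model and attribution code are in the ISeeU repository.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

N_FEATURES, N_TIMESTEPS = 22, 48            # hypothetical: 22 variables over 48 hours

def build_multiscale_cnn():
    """Parallel Conv1D branches with different kernel sizes ("multi-scale")."""
    inp = layers.Input(shape=(N_TIMESTEPS, N_FEATURES))
    branches = []
    for k in (3, 5, 7):                      # three temporal scales
        x = layers.Conv1D(32, k, padding="same", activation="relu")(inp)
        branches.append(layers.GlobalMaxPooling1D()(x))
    h = layers.Concatenate()(branches)
    out = layers.Dense(1, activation="sigmoid")(h)   # P(in-hospital death)
    return Model(inp, out)

def shapley_attributions(model, x, baseline, n_samples=200, rng=None):
    """Monte Carlo Shapley values over the feature axis.

    For each sampled permutation, features are added one at a time to a
    coalition; absent features keep the baseline trajectory. The change in
    model output when feature i joins is its marginal contribution, and the
    averages satisfy the efficiency property: they sum to f(x) - f(baseline).
    """
    rng = rng or np.random.default_rng(0)
    phi = np.zeros(N_FEATURES)
    for _ in range(n_samples):
        order = rng.permutation(N_FEATURES)
        masked = baseline.copy()
        prev = float(model.predict(masked[None], verbose=0)[0, 0])
        for i in order:
            masked[:, i] = x[:, i]           # add feature i to the coalition
            cur = float(model.predict(masked[None], verbose=0)[0, 0])
            phi[i] += cur - prev
            prev = cur
    return phi / n_samples

if __name__ == "__main__":
    model = build_multiscale_cnn()           # untrained; for shape checking only
    x = np.random.randn(N_TIMESTEPS, N_FEATURES).astype("float32")
    baseline = np.zeros_like(x)              # a real analysis would use a clinically meaningful baseline
    print(shapley_attributions(model, x, baseline, n_samples=20))
```

A production implementation would batch the coalition evaluations instead of calling `predict` per step, attribute over time steps as well as variables to obtain the visual (heatmap-style) explanations the paper describes, and use a trained model; the exact approach used by the authors is documented in the ISeeU repository.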

Keywords: Deep learning; ICU; MIMIC-III; Shapley Values.

MeSH terms

  • Algorithms
  • Area Under Curve
  • Critical Care / methods*
  • Deep Learning*
  • Electronic Health Records
  • Hospital Mortality*
  • Humans
  • Intensive Care Units*
  • Machine Learning
  • Medical Informatics / methods*
  • Neural Networks, Computer
  • ROC Curve
  • Reproducibility of Results
  • Retrospective Studies
  • Sensitivity and Specificity