An interpretable deep learning model for time-series electronic health records: Case study of delirium prediction in critical care

Artif Intell Med. 2023 Oct:144:102659. doi: 10.1016/j.artmed.2023.102659. Epub 2023 Sep 14.

Abstract

Deep Learning (DL) models have received increasing attention in the clinical setting, particularly in intensive care units (ICU). In this context, making the outcomes estimated by DL models interpretable is an essential step towards their wider adoption in clinical practice. To address this challenge, we propose an ante-hoc, interpretable neural network model. Our proposed model, named the double self-attention architecture (DSA), uses two attention-based mechanisms: self-attention and effective attention. It captures the importance of input variables overall, as well as changes in their importance along the time dimension, for the outcome of interest. We evaluated the model on two real-world clinical datasets covering 22,840 patients, predicting the onset of delirium 12 h and 48 h in advance. Additionally, we compared the descriptive performance of our model with three post-hoc interpretability algorithms as well as with clinical opinion based on the published literature and clinical experience. We find that our model covers the majority of the top-10 variables ranked by the three post-hoc interpretability algorithms and by clinical opinion, with the advantage of accounting for both dependencies among variables and dependencies across time steps. Finally, our results show that our model improves descriptive performance without sacrificing predictive performance.
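To make the idea of attention-based interpretability over time-series EHR data concrete, the sketch below shows a minimal, hypothetical module in PyTorch with one attention over the time axis and one importance weighting over the variable axis. It is only an illustration under assumed input shapes (batch, time steps, variables); the abstract does not specify the DSA layer sizes, the exact effective-attention formulation, or how the two mechanisms are combined, so all names and dimensions here are placeholders rather than the authors' implementation.

```python
# Hypothetical sketch of a "double self-attention" style module for time-series EHR data.
# Attention over the time axis stands in for self-attention; the per-step variable
# weighting stands in for an importance mechanism such as effective attention.
import torch
import torch.nn as nn


class DoubleSelfAttention(nn.Module):
    def __init__(self, n_vars: int, n_steps: int, d_model: int = 64):
        super().__init__()
        self.embed = nn.Linear(n_vars, d_model)          # per-time-step embedding
        self.time_attn = nn.MultiheadAttention(d_model, num_heads=4,
                                               batch_first=True)
        self.var_scores = nn.Linear(d_model, n_vars)     # variable-importance logits
        self.classifier = nn.Linear(d_model, 1)          # binary outcome (e.g. delirium onset)

    def forward(self, x):
        # x: (batch, n_steps, n_vars)
        h = self.embed(x)                                # (batch, n_steps, d_model)
        # Self-attention across time; attn_w gives step-to-step importance weights.
        h_t, attn_w = self.time_attn(h, h, h, need_weights=True)
        # Per-time-step variable importances, normalised over variables.
        var_w = torch.softmax(self.var_scores(h_t), dim=-1)   # (batch, n_steps, n_vars)
        # Re-weight the raw inputs by variable importance before pooling and classifying.
        pooled = self.embed(x * var_w).mean(dim=1)             # (batch, d_model)
        logit = self.classifier(pooled).squeeze(-1)
        return logit, attn_w, var_w


if __name__ == "__main__":
    model = DoubleSelfAttention(n_vars=30, n_steps=24)
    x = torch.randn(8, 24, 30)                           # 8 patients, 24 time steps, 30 variables
    logit, time_importance, var_importance = model(x)
    print(logit.shape, time_importance.shape, var_importance.shape)
```

The returned attention and importance tensors are what makes such a model ante-hoc interpretable: the time-attention weights indicate which time steps drive the prediction, and the variable weights indicate which clinical variables matter at each step, without requiring a separate post-hoc explanation method.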

Keywords: Clinical explainable AI; Critical care; Delirium prediction; Double Self-Attention (DSA); Time dimension; Variable importance.

MeSH terms

  • Critical Care
  • Deep Learning*
  • Delirium* / diagnosis
  • Electronic Health Records
  • Humans
  • Neural Networks, Computer