Opening the black box: interpretability of machine learning algorithms in electrocardiography

Philos Trans A Math Phys Eng Sci. 2021 Dec 13;379(2212):20200253. doi: 10.1098/rsta.2020.0253. Epub 2021 Oct 25.

Abstract

Recent studies have suggested that cardiac abnormalities can be detected from the electrocardiogram (ECG) using deep learning (DL) models. However, most DL algorithms lack interpretability, since they do not provide any justification for their decisions. In this study, we designed two new frameworks to interpret the classification results of DL algorithms trained for 12-lead ECG classification. The frameworks allow us to highlight not only the ECG samples that contributed most to the classification, but also which of the P-wave, QRS complex and T-wave, hereafter simply called 'waves', were the most relevant for the diagnosis. The frameworks were designed to be compatible with any DL model, including models that have already been trained. The frameworks were tested on a selected deep neural network, trained on a publicly available dataset, to automatically classify 24 cardiac abnormalities from 12-lead ECG signals. Experimental results showed that the frameworks were able to detect the most relevant ECG waves contributing to the classification. Often the network relied on portions of the ECG which are also considered by cardiologists to detect the same cardiac abnormalities, but this was not always the case. In conclusion, the proposed frameworks may unveil whether the network relies on features which are clinically significant for the detection of cardiac abnormalities from 12-lead ECG signals, thus increasing trust in DL models. This article is part of the theme issue 'Advanced computation in cardiovascular physiology: new challenges and opportunities'.
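The abstract does not detail how wave-level relevance is computed, but the general idea of attributing a classifier's decision to the P-wave, QRS complex or T-wave can be sketched with a simple occlusion-style analysis: suppress one wave segment at a time and measure the drop in the model's score. The function names, segment boundaries and toy "model" below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def wave_relevance(model, ecg, wave_masks):
    """Occlusion-style attribution (illustrative, not the paper's method).
    `model` maps a 1-D ECG array to a scalar class score;
    `wave_masks` maps wave names ('P', 'QRS', 'T') to boolean
    masks over the ECG samples. Relevance of a wave is the score
    drop when that wave's samples are zeroed out."""
    baseline = model(ecg)
    relevance = {}
    for wave, mask in wave_masks.items():
        occluded = ecg.copy()
        occluded[mask] = 0.0  # suppress this wave's samples
        relevance[wave] = baseline - model(occluded)
    return relevance

# Toy single-beat example: a stand-in "model" whose score depends
# only on samples 40-59, playing the role of the QRS complex.
rng = np.random.default_rng(0)
ecg = rng.normal(size=100)
model = lambda x: float(np.abs(x[40:60]).sum())

idx = np.arange(100)
masks = {
    "P":   idx < 30,                 # assumed P-wave region
    "QRS": (idx >= 40) & (idx < 60), # assumed QRS region
    "T":   idx >= 70,                # assumed T-wave region
}
rel = wave_relevance(model, ecg, masks)
# Only occluding the QRS region lowers this toy model's score.
```

In practice the wave masks would come from a delineation algorithm locating P, QRS and T boundaries on each beat, and `model` would be the trained network's output for the abnormality of interest.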

Keywords: electrocardiogram; explainable artificial intelligence; machine learning.

MeSH terms

  • Algorithms*
  • Arrhythmias, Cardiac
  • Electrocardiography*
  • Humans
  • Machine Learning
  • Neural Networks, Computer