Enhancing OSA Assessment with Explainable AI

Annu Int Conf IEEE Eng Med Biol Soc. 2023 Jul;2023:1-6. doi: 10.1109/EMBC40787.2023.10341035.

Abstract

Explainable Artificial Intelligence (xAI) is a rapidly growing field that focuses on making deep learning models interpretable and understandable to human decision-makers. In this study, we introduce xAAEnet, a novel xAI model applied to the assessment of Obstructive Sleep Apnea (OSA) severity. OSA is a prevalent sleep disorder that can lead to numerous medical conditions and is currently assessed using the Apnea-Hypopnea Index (AHI). However, the AHI has been criticized for its inability to accurately estimate the effect of OSA on related medical conditions. To address this issue, we propose a human-centric xAI approach that emphasizes similarity between apneic events as a whole and reduces subjectivity in diagnosis by examining how the model makes its decisions. Our model was trained and tested on a dataset of 60 patients' polysomnographic (PSG) recordings. Our results demonstrate that the proposed model, xAAEnet, outperforms models with traditional architectures such as a convolutional regressor, an autoencoder (AE), and a variational autoencoder (VAE). This study highlights the potential of xAI in providing an objective OSA severity scoring method.

Clinical relevance: This study provides an objective OSA severity scoring technique that could improve the management of apneic patients in clinical practice.
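For context on the metric the abstract critiques: the AHI is the standard count of respiratory events per hour of sleep, with conventional adult severity cutoffs at 5, 15, and 30 events/hour. The sketch below is illustrative only and is not taken from the paper; the function names and category labels are assumptions for this example.

```python
def ahi(num_apneas: int, num_hypopneas: int, total_sleep_hours: float) -> float:
    """Apnea-Hypopnea Index: respiratory events per hour of sleep."""
    if total_sleep_hours <= 0:
        raise ValueError("total sleep time must be positive")
    return (num_apneas + num_hypopneas) / total_sleep_hours


def severity(ahi_value: float) -> str:
    """Conventional adult severity bands (events/hour); labels are illustrative."""
    if ahi_value < 5:
        return "normal"
    if ahi_value < 15:
        return "mild"
    if ahi_value < 30:
        return "moderate"
    return "severe"
```

Because the AHI collapses every event to a count regardless of duration or physiological impact, two patients with the same score can differ widely in outcome, which is the limitation the proposed event-similarity approach aims to address.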

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Artificial Intelligence*
  • Humans
  • Polysomnography / methods
  • Sleep Apnea, Obstructive* / diagnosis