Explainable AI for Medical Image Analysis in Medical Cyber-Physical Systems: Enhancing Transparency and Trustworthiness of IoMT

IEEE J Biomed Health Inform. 2023 Nov 27:PP. doi: 10.1109/JBHI.2023.3336721. Online ahead of print.

Abstract

Medical image analysis plays a crucial role in Internet of Medical Things (IoMT) healthcare systems, aiding the diagnosis, treatment planning, and monitoring of various diseases. With the increasing adoption of artificial intelligence (AI) techniques for medical image analysis, there is a growing need for transparency and trustworthiness in decision-making. This study explores the application of explainable AI (XAI) to medical image analysis within medical cyber-physical systems (MCPS) to enhance transparency and trustworthiness. To this end, it proposes an explainable framework that integrates machine learning with knowledge reasoning: the explainability of the model is realized when the target-feature results produced by the framework and the knowledge-reasoning results coincide and are sufficiently reliable. Adopting these technologies, however, also raises new challenges, including the need to ensure the security and privacy of patient data collected through the IoMT; attack detection is therefore an essential aspect of MCPS security. For an MCPS model subject only to sensor attacks, necessary and sufficient conditions for detecting the attacks are given based on the definition of sparse observability. The corresponding attack detector and state estimator are designed under the assumption that some IoMT sensors are protected, and it is shown that the protected sensors play an important role in improving the efficiency of attack detection and state estimation. Experimental results show that XAI applied to medical image analysis within the MCPS improves the accuracy of lesion classification, effectively removes low-quality medical images, and makes the recognition results explainable. This helps doctors understand the logic of the system's decisions and choose whether to trust a result based on the explanation the framework provides.
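
As a concrete illustration of the agreement rule described in the abstract, the following is a minimal Python sketch, not the paper's implementation: it assumes the learned model yields a label with a confidence score, the knowledge-reasoning module yields its own label and reliability score, and a result counts as explainable only when the two agree and both clear a threshold. All names and the threshold value (decide, CONFIDENCE_THRESHOLD) are illustrative assumptions.

    CONFIDENCE_THRESHOLD = 0.8  # assumed reliability cutoff, not from the paper

    def decide(model_label, model_conf, reasoned_label, reasoned_conf):
        """Return (label, explainable, rationale) for one image-level result."""
        agree = model_label == reasoned_label
        reliable = min(model_conf, reasoned_conf) >= CONFIDENCE_THRESHOLD
        if agree and reliable:
            return (model_label, True,
                    f"Learned model and knowledge reasoning both support "
                    f"'{model_label}' with confidence >= {CONFIDENCE_THRESHOLD}.")
        # Disagreement or low reliability: surface the conflict to the clinician
        # instead of presenting the result as explained.
        return (model_label, False,
                f"Model predicts '{model_label}' ({model_conf:.2f}) but reasoning "
                f"yields '{reasoned_label}' ({reasoned_conf:.2f}); defer to the doctor.")

    print(decide("malignant", 0.93, "malignant", 0.88))  # explainable result
    print(decide("malignant", 0.93, "benign", 0.75))     # flagged for review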
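
The sparse-observability condition the abstract refers to can be stated in its standard form from the secure state estimation literature; the formulation below is a generic one consistent with the abstract, not the paper's exact statement.

    \[
    x_{k+1} = A x_k, \qquad y_k = C x_k + a_k,
    \]
    where $a_k \in \mathbb{R}^p$ is the attack vector, nonzero only on a fixed
    set of at most $s$ attacked sensors. The system is \emph{$s$-sparse
    observable} if for every index set $\Gamma \subseteq \{1,\dots,p\}$ with
    $|\Gamma| = p - s$ the pair $(A, C_\Gamma)$ is observable, where $C_\Gamma$
    keeps only the rows of $C$ indexed by $\Gamma$. In this standard
    formulation, attacks on up to $s$ sensors are detectable if and only if the
    system is $s$-sparse observable; sensors known to be protected are never
    candidates for removal, which is why protection eases both attack
    detection and state estimation.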
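
Under the same sensor-attack model, a residual-based detector can be sketched as follows: the initial state is estimated from the protected sensors alone (assumed attack-free and, taken together, observable), and each remaining sensor is flagged if its measurements deviate from the resulting prediction. This is a toy Python/NumPy sketch under those assumptions, not the paper's detector; detect_attacks and the example matrices are hypothetical.

    import numpy as np

    def detect_attacks(A, C, Y, protected, tol=1e-6):
        """A: (n,n) dynamics, C: (p,n) output map, Y: (T,p) noise-free
        measurements, protected: indices of attack-free sensors.
        Returns the estimated initial state and the flagged sensor indices."""
        n, T = A.shape[0], Y.shape[0]
        # Stack y_k[i] = C[i] @ A^k @ x0 over time for the protected sensors only.
        rows, vals = [], []
        Ak = np.eye(n)
        for k in range(T):
            for i in protected:
                rows.append(C[i] @ Ak)
                vals.append(Y[k, i])
            Ak = A @ Ak
        x0_hat, *_ = np.linalg.lstsq(np.array(rows), np.array(vals), rcond=None)
        # Flag any unprotected sensor whose trajectory deviates from the estimate.
        resid = np.zeros(C.shape[0])
        Ak = np.eye(n)
        for k in range(T):
            resid = np.maximum(resid, np.abs(Y[k] - C @ (Ak @ x0_hat)))
            Ak = A @ Ak
        attacked = [i for i in range(C.shape[0])
                    if i not in protected and resid[i] > tol]
        return x0_hat, attacked

    # Toy example: three sensors, sensor 2 under a constant bias attack.
    A = np.array([[1.0, 1.0], [0.0, 1.0]])
    C = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    x, Y = np.array([2.0, -1.0]), np.zeros((4, 3))
    for k in range(4):
        Y[k] = C @ x
        x = A @ x
    Y[:, 2] += 5.0
    print(detect_attacks(A, C, Y, protected=[0, 1]))  # recovers x0, flags sensor 2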