Machine learning approaches for cardiovascular hypertension stage estimation using photoplethysmography and clinical features

Front Cardiovasc Med. 2023 Dec 4:10:1285066. doi: 10.3389/fcvm.2023.1285066. eCollection 2023.

Abstract

Cardiovascular diseases (CVDs) are a leading cause of death worldwide, with hypertension emerging as a significant risk factor. Early detection and treatment of hypertension can significantly reduce the risk of developing CVDs and related complications. This work proposes a novel approach that employs features extracted from the acceleration photoplethysmography (APG) waveform, alongside clinical parameters, to estimate different stages of hypertension. The study used a publicly available dataset and a novel algorithm to extract APG waveform features. Three supervised machine learning algorithms were employed for the classification task: Decision Tree (DT), Linear Discriminant Analysis (LDA), and Linear Support Vector Machine (LSVM). Results indicate that the DT model achieved an exceptional accuracy of 100% during cross-validation and maintained a high accuracy of 96.87% on the test dataset. The LDA model demonstrated competitive performance, yielding 85.02% accuracy during cross-validation and 84.37% on the test dataset. The LSVM model exhibited robust performance, achieving 88.77% accuracy during cross-validation and 93.75% on the test dataset. These findings underscore the potential of APG analysis as a valuable tool for clinicians in estimating hypertension stages, supporting the need for early detection and intervention. This investigation not only advances hypertension risk assessment but also supports improved cardiovascular healthcare outcomes.
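To make the classification step concrete, the sketch below shows how the three models named in the abstract could be evaluated with cross-validation and a held-out test set. It is a minimal illustration assuming scikit-learn; the placeholder feature matrix, array shapes, class count, and split settings are assumptions for demonstration, not the authors' actual pipeline or dataset.

    # Minimal sketch (not the authors' code): evaluating the three classifiers
    # named in the abstract with 5-fold cross-validation and a held-out test split.
    # Assumes a pre-built feature matrix X (APG waveform + clinical features) and
    # hypertension-stage labels y; scikit-learn is an assumed dependency.
    import numpy as np
    from sklearn.model_selection import train_test_split, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.svm import LinearSVC

    # Placeholder data with arbitrary shapes: replace with the extracted APG and
    # clinical features (one row per subject) and the corresponding stage labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 12))
    y = rng.integers(0, 4, size=200)  # hypothetical hypertension-stage labels

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.15, stratify=y, random_state=0
    )

    models = {
        "DT": DecisionTreeClassifier(random_state=0),
        "LDA": LinearDiscriminantAnalysis(),
        "LSVM": make_pipeline(StandardScaler(), LinearSVC(dual=False)),
    }

    for name, model in models.items():
        cv_acc = cross_val_score(model, X_train, y_train, cv=5).mean()
        test_acc = model.fit(X_train, y_train).score(X_test, y_test)
        print(f"{name}: CV accuracy {cv_acc:.3f}, test accuracy {test_acc:.3f}")

Standardizing features before the linear SVM is one common design choice for margin-based models; the tree and LDA models are left on raw features here, but the preprocessing actually used in the study may differ.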

Keywords: acceleration photoplethysmography; cardiovascular; clinical features; feature engineering; hypertension; machine learning; photoplethysmography.

Associated data

  • figshare/10.6084/m9.figshare.5459299

Grants and funding

The author(s) declare financial support was received for the research, authorship, and/or publication of this article.