Using Visual Patient to Show Vital Sign Predictions, a Computer-Based Mixed Quantitative and Qualitative Simulation Study

Diagnostics (Basel). 2023 Oct 23;13(20):3281. doi: 10.3390/diagnostics13203281.

Abstract

Background: Machine learning can analyze vast amounts of data and make predictions about future events. Our group created machine learning models for vital sign prediction. To convey the information from these predictions without numerical values and make it easily usable for human caregivers, we aimed to integrate the predictions into the Philips Visual-Patient-avatar, an avatar-based visualization of patient monitoring.

Methods: We conducted a computer-based simulation study with 70 participants in 3 European university hospitals. We validated the vital sign prediction visualizations by testing how well anesthesiologists and intensivists identified them. Each prediction visualization consisted of a condition (e.g., low blood pressure) and an urgency (a visual indication of the timespan within which the condition is expected to occur). To obtain qualitative user feedback, we also conducted standardized interviews and derived statements from them, which participants later rated in an online survey.
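As a minimal sketch, one way to represent a single prediction visualization is as a pair of a condition and an urgency. The type names, condition list, and urgency levels below are illustrative assumptions, not the study's actual design or implementation.

```python
# Illustrative sketch only: the conditions and urgency levels shown here
# are assumptions for demonstration, not the study's actual set.
from dataclasses import dataclass
from enum import Enum

class Condition(Enum):
    LOW_BLOOD_PRESSURE = "low blood pressure"
    HIGH_HEART_RATE = "high heart rate"
    LOW_OXYGEN_SATURATION = "low oxygen saturation"

class Urgency(Enum):
    # Visual indication of the timespan within which the condition is
    # expected to occur (the two levels here are hypothetical).
    IMMINENT = "expected within minutes"
    EARLY = "expected within a longer timespan"

@dataclass(frozen=True)
class PredictionVisualization:
    condition: Condition
    urgency: Urgency

# Example instance as presented to a participant for identification.
example = PredictionVisualization(Condition.LOW_BLOOD_PRESSURE, Urgency.IMMINENT)
```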

Results: In the mixed logistic regression model, participants correctly identified 77.9% (95% CI 73.2-82.0%) of the prediction visualizations (i.e., condition and urgency both correctly identified) and 93.8% (95% CI 93.7-93.8%) of the conditions alone (i.e., without considering urgencies). A total of 49 of the 70 participants completed the online survey. The survey participants agreed that the prediction visualizations were fun to use (32/49, 65.3%) and that they could imagine working with them in the future (30/49, 61.2%). They also agreed that identifying the urgencies was difficult (32/49, 65.3%).
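A mixed model is used here because each participant contributes many identification trials, so observations within a rater are correlated. The following is a minimal sketch of such an analysis, assuming an intercept-only mixed-effects logistic regression with a random intercept per participant; the simulated data, variable names, and the use of statsmodels' BinomialBayesMixedGLM are illustrative assumptions, not the study's actual code.

```python
# Sketch of a mixed-effects logistic regression for repeated identification
# trials; all data below are simulated for illustration only.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

np.random.seed(0)

# Hypothetical long-format data: one row per identification trial,
# 70 participants with 10 trials each.
df = pd.DataFrame({
    "correct": np.random.binomial(1, 0.78, size=700),  # 1 = correctly identified
    "participant": np.repeat(np.arange(70), 10),       # rater identifier
})

# Intercept-only fixed effect; the per-participant random intercept
# absorbs between-rater variability in identification ability.
model = BinomialBayesMixedGLM.from_formula(
    "correct ~ 1",
    vc_formulas={"participant": "0 + C(participant)"},
    data=df,
)
result = model.fit_vb()

# The fixed-effect intercept is on the log-odds scale; transform it
# back to a proportion for reporting.
log_odds = result.fe_mean[0]
print("estimated identification rate:", 1 / (1 + np.exp(-log_odds)))
```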

Conclusions: This study found that care providers correctly identified >90% of the conditions (i.e., without considering urgencies). Identification accuracy decreased when urgencies were considered in addition to conditions. Therefore, in future development of the technology, we will focus either on displaying conditions only (without urgencies) or on improving the urgency visualizations to enhance usability for human users.

Keywords: Visual Patient; avatar; machine learning; monitoring; predictive models; vital sign predictions.

Grants and funding

This work was supported by grants from Philips Research North America, Cambridge, MA, USA.