A Comparison of CT Perfusion Output of RapidAI and Viz.ai Software in the Evaluation of Acute Ischemic Stroke

AJNR Am J Neuroradiol. 2024 Apr 4. doi: 10.3174/ajnr.A8196. Online ahead of print.

Abstract

Background and purpose: Automated CTP postprocessing packages have been developed for the management of acute ischemic stroke. These packages use image processing techniques to identify the ischemic core and penumbra. This study aimed to investigate the agreement between the decision-making rules and volumetric outputs of the RapidAI and Viz.ai software packages in early and late time windows and to identify predictors of inadequate-quality CTP studies.

Materials and methods: CTP studies obtained on presentation from 129 patients with acute ischemic stroke were analyzed with RapidAI and Viz.ai. Volumetric outputs were compared between packages using Spearman rank-order correlation and Wilcoxon signed-rank tests, with subanalysis at early (<6 hours) and extended (>6 hours) time windows. Concordance in selecting patients on the basis of DAWN and DEFUSE 3 eligibility criteria was assessed using the McNemar test.
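The paired statistical comparisons described above can be sketched as follows. This is an illustrative example with synthetic data, not the study's actual code or measurements; the variable names and the hypothetical discordant-pair counts are assumptions. The exact McNemar test on discordant pairs is equivalent to a binomial test with p = 0.5.

```python
# Illustrative sketch (synthetic data, not the study's): paired comparison of
# volumetric outputs from two CTP postprocessing packages.
import numpy as np
from scipy.stats import spearmanr, wilcoxon, binomtest

rng = np.random.default_rng(0)

# Hypothetical paired ischemic-core volume estimates (mL) for 108 studies
rapid_core = rng.gamma(shape=2.0, scale=15.0, size=108)
viz_core = rapid_core + rng.normal(loc=3.0, scale=8.0, size=108)  # assumed offset

rho, rho_p = spearmanr(rapid_core, viz_core)      # agreement in rank order
_, wil_p = wilcoxon(rapid_core, viz_core)         # paired systematic difference
_, p_one_sided = wilcoxon(viz_core, rapid_core, alternative="greater")

# Exact McNemar test on discordant triage decisions (hypothetical counts:
# b = eligible by one package only, c = eligible by the other only)
b, c = 3, 5
mcnemar_p = binomtest(b, b + c, 0.5).pvalue

print(f"Spearman rho={rho:.2f} (P={rho_p:.3g}); "
      f"Wilcoxon P={wil_p:.3g}; 1-sided P={p_one_sided:.3g}; "
      f"McNemar P={mcnemar_p:.3g}")
```

The two-sided Wilcoxon test asks whether the packages differ systematically on paired studies, while the Spearman coefficient asks only whether they rank studies similarly; the two can disagree, as in the reported results.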

Results: One hundred eight of 129 patients had adequate-quality studies. Spearman rank-order correlation coefficients between the two packages for time-to-maximum >6-second volume, time-to-maximum >10-second volume, CBF <30% volume, mismatch volume, and mismatch ratio were 0.82, 0.65, 0.77, 0.78, and 0.59, respectively. Wilcoxon signed-rank tests on the same measures yielded P values of .30, .016, <.001, .03, and <.001, respectively. In a 1-sided test, CBF <30% volume was greater in Viz.ai (P < .001). Although these differences were statistically significant, they were not clinically significant when applied to the DAWN and DEFUSE 3 criteria. A lower ejection fraction predicted an inadequate study in both software packages (P = .018; 95% CI, 0.01-0.113 for RapidAI; P = .024; 95% CI, 0.008-0.109 for Viz.ai).
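The claim that statistically different volumes can still yield equivalent triage can be illustrated by applying threshold-based eligibility rules to paired outputs. The sketch below uses the published DEFUSE 3 imaging criteria (core <70 mL, mismatch volume ≥15 mL, mismatch ratio ≥1.8); the paired example values are hypothetical, not data from this study.

```python
# Sketch (assumed example values, not the study's data): checking triage
# concordance between two packages under DEFUSE 3-style imaging thresholds.

def defuse3_eligible(core_ml: float, mismatch_ml: float, ratio: float) -> bool:
    """DEFUSE 3 imaging criteria: core <70 mL, mismatch volume >=15 mL,
    mismatch ratio >=1.8."""
    return core_ml < 70 and mismatch_ml >= 15 and ratio >= 1.8

# Hypothetical paired outputs for three studies: (core mL, mismatch mL, ratio)
rapid = [(20, 60, 4.0), (85, 30, 1.3), (40, 10, 1.2)]
viz   = [(25, 55, 3.2), (90, 25, 1.2), (38, 12, 1.3)]

rapid_dec = [defuse3_eligible(*s) for s in rapid]
viz_dec = [defuse3_eligible(*s) for s in viz]
concordant = sum(r == v for r, v in zip(rapid_dec, viz_dec))
print(f"Concordant triage decisions: {concordant}/{len(rapid)}")  # → 3/3
```

Because eligibility depends only on whether each value crosses a threshold, modest systematic offsets between packages change the decision only for studies near a cutoff, which is how volumes can differ significantly without altering triage.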

Conclusions: Penumbra and infarct core predictions from RapidAI and Viz.ai correlated but were statistically different, yet resulted in equivalent triage under the DAWN and DEFUSE 3 criteria. Viz.ai predicted larger ischemic core volumes than RapidAI. For combined core and penumbra, Viz.ai estimated lower values than RapidAI at lower volumes and higher values at higher volumes. Clinicians should be cautious when using different software packages for clinical decision-making.