Connectome-based machine learning models are vulnerable to subtle data manipulations

Patterns (N Y). 2023 May 15;4(7):100756. doi: 10.1016/j.patter.2023.100756. eCollection 2023 Jul 14.

Abstract

Neuroimaging-based predictive models continue to improve in performance, yet a widely overlooked aspect of these models is "trustworthiness," or robustness to data manipulations. High trustworthiness is imperative for researchers to have confidence in their findings and interpretations. In this work, we used functional connectomes to explore how minor data manipulations influence machine learning predictions. These manipulations included a method to falsely enhance prediction performance and adversarial noise attacks designed to degrade performance. Although these data manipulations drastically changed model performance, the original and manipulated data were nearly identical (r = 0.99), and the manipulations did not affect other downstream analyses. Essentially, connectome data could be inconspicuously modified to achieve any desired prediction performance. Overall, our enhancement attacks and evaluation of existing adversarial noise attacks in connectome-based models highlight the need for countermeasures that improve trustworthiness, preserving the integrity of academic research and any potential translational applications.
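
For intuition, the following is a minimal, hypothetical sketch of the two kinds of manipulation the abstract describes, not the paper's actual attack code: an FGSM-style adversarial noise step for a linear model, and a simple stand-in for an enhancement attack that injects a phenotype-correlated component along the model's weight direction. The toy connectome data, variable names, and perturbation scale (eps) are all assumptions for illustration; the point is only that perturbations leaving the data correlated at roughly r = 0.99 with the original can still shift prediction performance in either direction.

# Hypothetical illustration only; not the paper's attack implementation.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy "connectomes": subjects x flattened functional-connectivity edges.
n_subjects, n_edges = 300, 5000
X = rng.normal(size=(n_subjects, n_edges))
y = X[:, :10] @ rng.normal(size=10) + rng.normal(scale=1.0, size=n_subjects)  # toy phenotype

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = Ridge(alpha=100.0).fit(X_tr, y_tr)

eps = 0.05  # small relative to the edge values (std = 1)

# (1) Adversarial noise: an FGSM-style step for a linear model, pushing each
#     test subject's prediction further from its true value to degrade performance.
resid_sign = np.sign(model.predict(X_te) - y_te)
X_noise = X_te + eps * resid_sign[:, None] * np.sign(model.coef_)[None, :]

# (2) Enhancement stand-in: add a component aligned with the model weights and
#     scaled by the (z-scored) phenotype, falsely inflating performance.
w_dir = model.coef_ / np.linalg.norm(model.coef_)
y_z = (y_te - y_te.mean()) / y_te.std()
X_enh = X_te + eps * np.sqrt(n_edges) * y_z[:, None] * w_dir[None, :]

r_clean, _ = pearsonr(model.predict(X_te), y_te)
print(f"clean:          prediction r = {r_clean:.3f}")
for name, X_mod in [("noise-attacked", X_noise), ("enhanced      ", X_enh)]:
    # Per-subject similarity between original and manipulated connectomes.
    sim = np.mean([np.corrcoef(a, b)[0, 1] for a, b in zip(X_te, X_mod)])
    r, _ = pearsonr(model.predict(X_mod), y_te)
    print(f"{name}: prediction r = {r:.3f}, original-vs-modified data r = {sim:.3f}")

In this sketch, each edge is perturbed by roughly 0.05 standard deviations, so the per-subject correlation between original and manipulated connectomes stays near 0.99 while the model's prediction accuracy moves down (noise attack) or up (enhancement).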

Keywords: adversarial attacks; connectomics; fMRI; functional connectivity; machine learning; predictive modeling; trustworthiness.