Does ignoring clustering in multicenter data influence the performance of prediction models? A simulation study

Stat Methods Med Res. 2018 Jun;27(6):1723-1736. doi: 10.1177/0962280216668555. Epub 2016 Sep 19.

Abstract

Clinical risk prediction models are increasingly being developed and validated on multicenter datasets. In this article, we present a comprehensive framework for evaluating the predictive performance of prediction models at the center level and the population level, considering population-averaged predictions, center-specific predictions, and predictions assuming an average random center effect. In a simulation study, we demonstrated that calibration slopes deviate from one not only because of over- or underfitting of patterns in the development dataset, but also as a result of the choice of model (standard versus mixed effects logistic regression), the type of predictions (marginal versus conditional versus assuming an average random effect), and the level of model validation (center versus population). In particular, when data are heavily clustered (intraclass correlation coefficient of 20%), center-specific predictions offer the best predictive performance at both the population level and the center level. We recommend that models reflect the data structure, while the level of model validation should reflect the research question.
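The distinction between the three prediction types can be made concrete for a random-intercept logistic model. The sketch below is not from the article; it is a minimal illustration, assuming a logistic model with a normally distributed random center intercept u, where the ICC on the latent scale equals sigma^2 / (sigma^2 + pi^2/3). Conditional predictions plug in a center's u, average-random-effect predictions set u = 0, and marginal (population-averaged) predictions integrate u out:

```python
import math
import random

def expit(x):
    """Inverse logit."""
    return 1.0 / (1.0 + math.exp(-x))

# Residual variance of the standard logistic distribution (latent scale).
LOGISTIC_VAR = math.pi ** 2 / 3

def icc_to_sigma2(icc):
    """Random-intercept variance implied by a latent-scale ICC:
    ICC = sigma^2 / (sigma^2 + pi^2/3)  =>  sigma^2 = ICC * (pi^2/3) / (1 - ICC)."""
    return icc * LOGISTIC_VAR / (1.0 - icc)

def conditional_prob(lp, u):
    """Center-specific (conditional) prediction: use the center's random effect u."""
    return expit(lp + u)

def average_center_prob(lp):
    """Prediction assuming an average random center effect (u = 0)."""
    return expit(lp)

def marginal_prob(lp, sigma, n_draws=200_000, seed=1):
    """Population-averaged (marginal) prediction: integrate the random
    intercept out by Monte Carlo over u ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    return sum(expit(lp + rng.gauss(0.0, sigma)) for _ in range(n_draws)) / n_draws

# Heavy clustering, as in the abstract: ICC of 20%.
sigma = math.sqrt(icc_to_sigma2(0.20))
lp = 1.0  # hypothetical linear predictor for one patient

p_avg = average_center_prob(lp)    # assumes an average center
p_marg = marginal_prob(lp, sigma)  # averages over centers
# Marginal probabilities are attenuated toward 0.5 relative to the
# average-center prediction, one reason calibration slopes differ
# across prediction types even for a correctly specified model.
```

The gap between `p_marg` and `p_avg` grows with the random-intercept variance, which is why the abstract's conclusions depend on the degree of clustering.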

Keywords: Mixed model; bias; calibration; clinical prediction model; discrimination; logistic regression; predictive performance.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms
  • Biomedical Research / statistics & numerical data
  • Cluster Analysis*
  • Logistic Models*
  • Multicenter Studies as Topic*