Common challenges and suggestions for risk of bias tool development: a systematic review of methodological studies

J Clin Epidemiol. 2024 Apr 24:171:111370. doi: 10.1016/j.jclinepi.2024.111370. Online ahead of print.

Abstract

Objectives: To review the findings of studies that have evaluated the design and/or usability of key risk of bias (RoB) tools for the assessment of RoB in primary studies, as categorized by the Library of Assessment Tools and InsTruments Used to assess Data validity in Evidence Synthesis Network (a searchable library of RoB tools for evidence synthesis): Prediction model Risk Of Bias ASsessment Tool (PROBAST), Risk of Bias 2 (RoB2), Risk Of Bias In Non-randomised Studies of Interventions (ROBINS-I), Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2), Quality Assessment of Diagnostic Accuracy Studies-Comparative (QUADAS-C), Quality Assessment of Prognostic Accuracy Studies (QUAPAS), Risk Of Bias in Non-randomised Studies of Exposures (ROBINS-E), and the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) RoB checklist.

Study design and setting: Systematic review of methodological studies. We conducted a forward citation search from the primary report of each tool to identify primary studies that aimed to evaluate the tool's design and/or usability. Two reviewers assessed studies for inclusion. We extracted tool features into Microsoft Word and used NVivo for document analysis, combining deductive and inductive approaches. We summarized findings within each tool and explored common findings across tools.

Results: We identified 13 tool evaluations meeting our inclusion criteria: PROBAST (3), RoB2 (3), ROBINS-I (4), and QUADAS-2 (3). We identified no evaluations of the other tools. Evaluations varied in clinical topic area, methodology, approach to bias assessment, and tool user background. Some had limitations affecting the generalizability of their findings. We identified common findings across tools for 6/14 themes: (1) challenging items (eg, the RoB2/ROBINS-I "deviations from intended interventions" domain), (2) overall RoB judgment (concerns with the calculation of overall risk in PROBAST/ROBINS-I), (3) tool usability (concerns about complexity), (4) time to complete the tool (varying demands on time, eg, depending on the number of outcomes assessed), (5) user agreement (varied across tools), and (6) recommendations for future use (eg, piloting) and development (add an intermediate domain-level answer option to QUADAS-2/PROBAST; provide clearer guidance for all tools). Of the remaining eight themes, seven had findings only for the QUADAS-2 tool, limiting comparison across tools, and one ("reorganization of questions") had no findings.

Conclusion: Evaluations of key RoB tools have identified common challenges and made recommendations for tool use and development. These findings may be helpful to people who use or develop RoB tools. Guidance is needed to support the design and implementation of future RoB tool evaluations.

Keywords: Evaluation; Quality assessment; Research methods; Risk of bias; RoB; Systematic reviews.