Adjusting for verification bias in diagnostic test evaluation: a Bayesian approach

Stat Med. 2008 Jun 15;27(13):2453-73. doi: 10.1002/sim.3099.

Abstract

Obtaining accurate estimates of the performance of a diagnostic test for some population of patients can be difficult when the sample of subjects used for this purpose is not representative of the whole population. In the motivating example of this paper, a test is evaluated by comparing its results with those given by a gold standard procedure, which yields verification of disease status. However, this procedure is invasive and carries a non-negligible risk of serious complications. Moreover, subjects are selected to undergo the gold standard based on certain risk factors and on the results of the test under study. Consequently, test performance estimates based on the selected sample of subjects are biased, a problem known in previous studies as verification bias. The current paper introduces a Bayesian method to adjust for this bias, which can be regarded as a missing data problem. In addition, it addresses the case of non-ignorable verification bias. The proposed Bayesian estimation approach provides test performance estimates that are consistent with the results obtained using a likelihood-based approach. Finally, the paper studies how valuable the statistical findings are from the perspective of clinical decision making.
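The paper's Bayesian model is not reproduced here, but the mechanics of verification bias can be illustrated with the classical Begg-Greenes correction, which applies under an ignorable (missing-at-random) verification mechanism: if verification depends only on the test result, then P(D | T) can still be estimated from verified subjects and combined with the marginal test probabilities from the full sample via Bayes' theorem. A minimal sketch, with all counts hypothetical:

```python
# Verification bias: disease status D is verified (V=1) only for a
# test-result-dependent subset of subjects, so naive sensitivity and
# specificity computed from verified subjects alone are biased.
# Begg-Greenes correction, assuming verification is ignorable (MAR)
# given the test result T. All counts below are hypothetical.

n_pos, n_neg = 400, 600   # test-positive / test-negative in the full sample
v_pos, v_neg = 360, 120   # verified among test-positives / test-negatives
d_pos, d_neg = 252, 24    # diseased among verified test-pos / test-neg

# Naive sensitivity uses verified subjects only (biased upward here,
# because test-positives are verified far more often).
naive_se = d_pos / (d_pos + d_neg)

# Under MAR, P(D=1 | T, V=1) = P(D=1 | T), estimable from verified subjects.
p_d_tpos = d_pos / v_pos
p_d_tneg = d_neg / v_neg

# Marginal test probabilities come from the full (unselected) sample.
p_tpos = n_pos / (n_pos + n_neg)
p_tneg = n_neg / (n_pos + n_neg)

# Corrected sensitivity P(T=1 | D=1) and specificity P(T=0 | D=0)
# via Bayes' theorem.
se = p_tpos * p_d_tpos / (p_tpos * p_d_tpos + p_tneg * p_d_tneg)
sp = p_tneg * (1 - p_d_tneg) / (
    p_tpos * (1 - p_d_tpos) + p_tneg * (1 - p_d_tneg)
)
print(round(naive_se, 3), round(se, 3), round(sp, 3))  # 0.913 0.7 0.8
```

With these hypothetical counts the naive sensitivity (0.913) overstates the corrected value (0.7). The paper's contribution goes beyond this sketch: it places the correction in a Bayesian framework and, unlike the MAR assumption made here, also handles non-ignorable verification.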

Publication types

  • Comparative Study

MeSH terms

  • Bayes Theorem*
  • Coronary Artery Disease / diagnosis
  • Data Interpretation, Statistical*
  • Decision Making
  • Diagnostic Tests, Routine / standards*
  • Humans
  • Models, Statistical*
  • Tomography, Emission-Computed, Single-Photon