Bootstrap-Calibrated Interval Estimates for Latent Variable Scores in Item Response Theory

Psychometrika. 2018 Jun;83(2):333-354. doi: 10.1007/s11336-017-9582-9. Epub 2017 Sep 6.

Abstract

In most item response theory applications, model parameters must first be calibrated from sample data. Latent variable (LV) scores computed with the estimated parameters are therefore subject to sampling error carried over from the calibration stage. In this article, we propose a resampling-based method, bootstrap calibration (BC), to reduce the impact of this carryover sampling error on interval estimates of LV scores. BC modifies the quantile of the plug-in posterior, i.e., the posterior distribution of the LV evaluated at the estimated model parameters, so that it better matches the corresponding quantile of the true posterior, i.e., the posterior distribution evaluated at the true model parameters, over repeated sampling of calibration data. Furthermore, to achieve better coverage of the fixed true LV score, we explore the use of BC in conjunction with Jeffreys' prior. We investigate the finite-sample performance of BC via Monte Carlo simulations and apply it to two empirical data examples.
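The abstract's core idea, adjusting a plug-in posterior quantile so that it behaves like the corresponding true-posterior quantile over repeated calibration samples, can be illustrated with a small numerical sketch. The code below is not the authors' implementation: it uses a Rasch model, a crude method-of-moments item estimator, and a grid-based posterior purely for illustration, and treats the original estimates as "truth" when searching for an adjusted quantile level, in the spirit of bootstrap calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Grid approximation to a N(0, 1) prior on the latent variable theta.
grid = np.linspace(-4, 4, 161)
prior = np.exp(-grid**2 / 2)
prior /= prior.sum()

def marginal_p(b):
    # Model-implied proportion correct for difficulty b when theta ~ N(0, 1).
    return (sigmoid(grid - b) * prior).sum()

def estimate_difficulty(pbar):
    # Crude method-of-moments estimate (illustrative only): invert
    # marginal_p by bisection so the model matches the observed p-value.
    lo, hi = -5.0, 5.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if marginal_p(mid) > pbar:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def posterior_cdf(resp, b):
    # Plug-in posterior CDF of theta given a response pattern and
    # (estimated) item difficulties b, on the grid.
    p = sigmoid(grid[:, None] - b[None, :])
    like = np.prod(np.where(resp, p, 1 - p), axis=1)
    post = like * prior
    post /= post.sum()
    return np.cumsum(post)

def quantile(cdf, q):
    return grid[np.searchsorted(cdf, q)]

# Simulate calibration data from a Rasch model with known parameters.
J, N = 20, 500
b_true = rng.normal(0, 1, J)
theta = rng.normal(0, 1, N)
X = (rng.random((N, J)) < sigmoid(theta[:, None] - b_true[None, :])).astype(int)

b_hat = np.array([estimate_difficulty(X[:, j].mean().clip(0.02, 0.98))
                  for j in range(J)])

# Score one response pattern: plug-in posterior at the point estimates.
resp = X[0]
F0 = posterior_cdf(resp, b_hat)

# Bootstrap: resample examinees and re-estimate the item difficulties.
B = 100
boot_cdfs = []
for _ in range(B):
    idx = rng.integers(0, N, N)
    bb = np.array([estimate_difficulty(X[idx, j].mean().clip(0.02, 0.98))
                   for j in range(J)])
    boot_cdfs.append(posterior_cdf(resp, bb))

# BC-style adjustment for the lower endpoint of a 90% interval: choose an
# adjusted level q_adj so that the q_adj-quantiles of the bootstrap plug-in
# posteriors sit, on average, at probability q_target under F0 (the original
# plug-in posterior, standing in for the "true" posterior).
q_target = 0.05

def avg_coverage(q):
    return np.mean([F0[np.searchsorted(cdf, q)] for cdf in boot_cdfs])

qs = np.linspace(0.005, 0.3, 60)
q_adj = qs[np.argmin([abs(avg_coverage(q) - q_target) for q in qs])]

print("adjusted level:", q_adj)
print("naive 5% endpoint:", quantile(F0, q_target))
print("calibrated endpoint:", quantile(F0, q_adj))
```

The adjusted level `q_adj` replaces the nominal level when reading off the interval endpoint from the plug-in posterior; the same search can be run for the upper endpoint. In practice one would use a proper marginal maximum likelihood calibration rather than the toy estimator above.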

Keywords: bootstrap; item response theory; predictive inference; scoring.

MeSH terms

  • Attitude
  • Bayes Theorem
  • Computer Simulation
  • Educational Measurement
  • Humans
  • Models, Psychological*
  • Monte Carlo Method
  • Psychometrics / methods*
  • Reaction Time*
  • Surveys and Questionnaires