Bookmark locations and item response model selection in the presence of local item dependence

J Appl Meas. 2007;8(1):65-83.

Abstract

The bookmark standard setting procedure is a popular method for setting performance standards on state assessment programs. This study reanalyzed data from an application of the bookmark procedure to a passage-based test in which the Rasch model was used to create the ordered item booklet. Several problems were noted in this implementation of the bookmark procedure: disagreement among the subject-matter experts (SMEs) about the correct order of items in the bookmark booklet, performance level descriptions of the passing standard that were based on passage difficulty as well as item difficulty, and the presence of local item dependence within reading passages. Bookmark item locations were recalculated under the three-parameter logistic model and the multidimensional bifactor model. The results showed that the ordering of item locations was very similar across all three models once items of high difficulty and low discrimination were excluded. However, the items whose positions were most discrepant between models were not the items the SMEs disagreed about most in the original standard setting. The choice of latent trait model therefore did not resolve the problem of item order disagreement. Implications for using the bookmark method in the presence of local item dependence are discussed.
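To illustrate how bookmark item locations depend on the latent trait model, the sketch below shows the standard mapping from item parameters to the ability scale for the Rasch and three-parameter logistic models, assuming a response-probability criterion of 0.67 and the conventional D = 1.7 scaling constant. The item parameters are hypothetical and are not taken from the study; the code is a minimal sketch of the general technique, not the authors' procedure.

```python
import math

def rasch_location(b, rp=0.67):
    """Theta at which a Rasch item is answered correctly with probability rp."""
    return b + math.log(rp / (1 - rp))

def three_pl_location(a, b, c, rp=0.67, D=1.7):
    """Theta at which a 3PL item is answered correctly with probability rp.
    Requires rp > c, or the item never reaches the criterion probability."""
    if rp <= c:
        raise ValueError("response-probability criterion must exceed the guessing parameter")
    return b + math.log((rp - c) / (1 - rp)) / (D * a)

# Hypothetical item parameters; ordering items by their mapped locations
# mimics how an ordered item booklet would be assembled under each model.
items = [
    {"name": "item1", "a": 1.2, "b": -0.5, "c": 0.20},
    {"name": "item2", "a": 0.6, "b": 0.3, "c": 0.15},
    {"name": "item3", "a": 1.0, "b": 1.1, "c": 0.25},
]
rasch_order = sorted(items, key=lambda it: rasch_location(it["b"]))
pl3_order = sorted(items, key=lambda it: three_pl_location(it["a"], it["b"], it["c"]))
print("Rasch order:", [it["name"] for it in rasch_order])
print("3PL order:  ", [it["name"] for it in pl3_order])
```

Because the 3PL location depends on discrimination and guessing as well as difficulty, items with low discrimination or high guessing parameters can shift position relative to the Rasch ordering, which is the kind of discrepancy the reanalysis examined.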

MeSH terms

  • Achievement*
  • Educational Measurement*
  • Humans
  • Models, Psychological*