Using item response theory to appraise key feature examinations for clinical reasoning

Med Teach. 2022 Nov;44(11):1253-1259. doi: 10.1080/0142159X.2022.2077716. Epub 2022 Jun 2.

Abstract

Background: Validation of examinations is usually based on classical test theory. In this study, we analysed a key feature examination according to item response theory and compared the results with those of a classical test theory approach.

Methods: Over the course of five years, 805 fourth-year undergraduate students took a 30-item key feature examination in general medicine. Analyses were run using both a classical test theory approach and item response theory. Classical test theory analyses are reported as item difficulty, discriminatory power, and Cronbach's alpha, while item response theory analyses are presented as item characteristic curves, item information curves, and a test information function.
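The classical test theory statistics named above have standard definitions that can be computed directly from a scored (0/1) response matrix. The sketch below is illustrative, not the authors' analysis code; the function names and the tiny example matrix are invented for demonstration.

```python
from statistics import mean, variance

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ctt_stats(responses):
    """Classical test theory item statistics for a 0/1 response matrix
    (rows = examinees, columns = items). Returns item difficulty
    (proportion correct), discriminatory power (corrected item-total
    correlation), and Cronbach's alpha."""
    n_items = len(responses[0])
    cols = [[row[j] for row in responses] for j in range(n_items)]
    totals = [sum(row) for row in responses]
    difficulty = [mean(c) for c in cols]
    # Correlate each item with the total score *excluding* that item,
    # so an item is not correlated with itself.
    discrimination = [
        pearson(c, [t - x for t, x in zip(totals, c)]) for c in cols
    ]
    # Cronbach's alpha from item and total-score sample variances.
    alpha = n_items / (n_items - 1) * (
        1 - sum(variance(c) for c in cols) / variance(totals)
    )
    return difficulty, discrimination, alpha
```

On a real examination such as the one described here, high mean difficulty values (most examinees answering most items correctly) are what leads to the "easy" label reported in the Results.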

Results: According to classical test theory findings, the examination was classified as easy. Analyses according to item response theory indicated more specifically that the examination was best suited to identifying struggling students. Furthermore, the analysis allowed the examination to be adapted to specific ability ranges by removing items, and permitted comparison of multiple samples with varying ability ranges.

Conclusions: Item response theory analyses revealed results not yielded by classical test theory. Thus, both approaches should be routinely combined to increase the information yield of examination data.

Keywords: Medical education; clinical reasoning; item response theory; key feature; test theory.

MeSH terms

  • Clinical Reasoning*
  • Educational Measurement* / methods
  • Humans
  • Psychometrics