Scale Separation Reliability: What Does It Mean in the Context of Comparative Judgment?

Appl Psychol Meas. 2018 Sep;42(6):428-445. doi: 10.1177/0146621617748321. Epub 2017 Dec 31.

Abstract

Comparative judgment (CJ) is an alternative method for assessing competences based on Thurstone's law of comparative judgment. Assessors are asked to compare pairs of students' work (representations) and to judge which one is better with respect to a certain competence. These judgments are analyzed using the Bradley-Terry-Luce model, resulting in logit estimates for the representations. In this context, the Scale Separation Reliability (SSR), which originates in Rasch modeling, is typically used as the reliability measure. However, to the authors' knowledge, it has never been systematically investigated whether the meaning of the SSR can be transferred from Rasch modeling to CJ. As the meaning of reliability is an important question for both assessment theory and practice, the current study addresses this issue. A meta-analysis is performed on 26 CJ assessments. For every assessment, split halves are created based on assessors. The rank orders of the whole assessment and of the halves are correlated and compared with SSR values using Bland-Altman plots. Comparing the correlation between the two halves of an assessment with the SSR of the whole assessment shows that the SSR is a good measure of split-half reliability. Comparing the SSR of one of the halves with the correlation between the two respective halves shows that the SSR can also be interpreted as an interrater correlation. Regarding the SSR as expressing a correlation with the truth, the results are mixed.
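
For orientation, the two quantities at the center of the study can be sketched in their standard textbook forms (this is a generic illustration, not material quoted from the article; the symbols $v_i$, $\hat{v}$, and $SE$ are introduced here purely for illustration). The Bradley-Terry-Luce model gives the probability that representation $i$ wins a comparison against representation $j$ as a function of their logit values, and the SSR, as defined in the Rasch literature, expresses the proportion of observed variance in the estimated logits that is not attributable to measurement error:

$$P(i \text{ beats } j) = \frac{\exp(v_i - v_j)}{1 + \exp(v_i - v_j)}, \qquad \mathrm{SSR} = \frac{\sigma_{\hat{v}}^2 - \overline{SE^2}}{\sigma_{\hat{v}}^2},$$

where $\sigma_{\hat{v}}^2$ is the observed variance of the estimated logits and $\overline{SE^2}$ is the mean squared standard error of those estimates.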

Keywords: IRT; Rasch measurement; Scale Separation Reliability (SSR); comparative judgment (CJ); reliability theory.