The role of raters' threshold in estimating interrater agreement

Psychol Methods. 2021 Oct;26(5):622-634. doi: 10.1037/met0000416.

Abstract

The evaluation of agreement among experts in a classification task is crucial in many situations (e.g., medical and psychological diagnosis, legal reports). Traditional indexes used to estimate interrater agreement (such as Cohen's κ) simply count the observed agreements and correct them for chance agreement. In this article, we introduce a new theoretical framework for the evaluation of interrater agreement based on the possibility of adjusting the classifications produced by the raters. This framework rests on the introduction and formalization of two concepts involved in the classification task: (a) the belonging measure of an object to a category and (b) the rater's belonging threshold, that is, the minimal value of the belonging measure at which the rater classifies an object into a category. These factors are ignored by traditional indexes of interrater agreement, although their role may be decisive. Two Bayesian models are tested in a Monte Carlo simulation study to evaluate how accurately the new methodology estimates the raters' thresholds and the actual degree of agreement between two independent raters. Results show that computing traditional agreement indexes on the adjusted classifications yields a more accurate estimate of the experts' actual agreement. The improvement is greatest when the raters' belonging thresholds differ markedly; when the difference is small, the proposed method gives results similar to those obtained from the observed classifications alone. Finally, an empirical application to the field of psychological assessment shows how the method can be used in practice. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
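The core idea lends itself to a small numerical illustration. The sketch below (plain Python with NumPy, not taken from the article) simulates two raters who respond to the same latent belonging measures but apply different belonging thresholds, and compares Cohen's κ on the raw classifications with κ recomputed after reclassifying both raters at a common threshold. The threshold values, the perception noise, and the midpoint used as the common threshold are illustrative assumptions standing in for the article's Bayesian estimation of the thresholds.

```python
import numpy as np

rng = np.random.default_rng(0)

def cohens_kappa(a, b):
    """Cohen's kappa for two binary classification vectors."""
    a, b = np.asarray(a), np.asarray(b)
    p_obs = np.mean(a == b)  # observed agreement
    p_chance = (np.mean(a == 1) * np.mean(b == 1)
                + np.mean(a == 0) * np.mean(b == 0))  # chance agreement
    return (p_obs - p_chance) / (1 - p_chance)

# Latent "belonging measures": how strongly each object belongs to the category.
n = 1000
belonging = rng.uniform(0, 1, n)

# Each rater perceives the belonging measure with a little noise ...
perceived_a = np.clip(belonging + rng.normal(0, 0.05, n), 0, 1)
perceived_b = np.clip(belonging + rng.normal(0, 0.05, n), 0, 1)

# ... and classifies an object into the category once the perceived belonging
# measure exceeds the rater's own threshold (assumed values, illustration only).
threshold_a, threshold_b = 0.40, 0.65
rater_a = (perceived_a > threshold_a).astype(int)
rater_b = (perceived_b > threshold_b).astype(int)

# Kappa on the observed classifications is depressed by the threshold gap.
print("kappa (observed):", round(cohens_kappa(rater_a, rater_b), 3))

# Reclassifying both raters at a common threshold (here simply the midpoint,
# standing in for a model-based estimate) removes the threshold effect and
# leaves only genuine perceptual disagreement.
common = (threshold_a + threshold_b) / 2
adj_a = (perceived_a > common).astype(int)
adj_b = (perceived_b > common).astype(int)
print("kappa (adjusted):", round(cohens_kappa(adj_a, adj_b), 3))
```

Under these toy settings the adjusted κ comes out noticeably higher than the observed κ, mirroring the pattern the simulation study reports when the raters' belonging thresholds differ markedly; as the threshold gap shrinks, the two values converge.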

MeSH terms

  • Bayes Theorem*
  • Computer Simulation
  • Humans
  • Observer Variation
  • Reproducibility of Results