Inter-Coder Agreement in One-to-Many Classification: Fuzzy Kappa

PLoS One. 2016 Mar 2;11(3):e0149787. doi: 10.1371/journal.pone.0149787. eCollection 2016.

Abstract

Content analysis involves classification of textual, visual, or audio data. Inter-coder agreement is estimated by having two or more coders classify the same data units and then comparing their results. The existing methods of agreement estimation, e.g., Cohen's kappa, require that coders place each unit of content into one and only one category (one-to-one coding) from a pre-established set of categories. However, in certain data domains (e.g., maps, photographs, databases of texts and images), this requirement seems overly restrictive. The restriction could be lifted, provided there is a measure for calculating inter-coder agreement under a one-to-many protocol. Building on existing approaches to one-to-many coding in geography and biomedicine, such a measure, fuzzy kappa, an extension of Cohen's kappa, is proposed. It is argued that the measure is especially compatible with data from domains where the holistic reasoning of human coders is used to describe the data and access the meaning of communication.
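To illustrate the general idea of a kappa-style statistic for one-to-many coding, the sketch below computes agreement between two coders who may assign several categories to each unit. It is only one plausible construction, not the formulation from the paper: per-unit agreement is taken here as a Jaccard-style overlap of the two category sets, and chance agreement is approximated by averaging that overlap over all cross-pairings of the two coders' category sets. The function and variable names are illustrative.

# Minimal sketch of a fuzzy-kappa-like statistic for one-to-many coding.
# Assumptions (not from the paper): Jaccard overlap as per-unit agreement,
# all-pairs cross-coder overlap as the chance-agreement model.
from typing import Sequence, Set


def unit_overlap(a: Set[str], b: Set[str]) -> float:
    """Per-unit agreement as the Jaccard overlap of two category sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


def fuzzy_kappa_sketch(coder1: Sequence[Set[str]], coder2: Sequence[Set[str]]) -> float:
    """Kappa-style statistic: (observed - expected) / (1 - expected).

    Observed agreement: mean per-unit overlap across matched units.
    Expected agreement: mean overlap when every category set of coder 1 is
    paired with every category set of coder 2 (a permutation-style chance
    model; the published measure may define this differently).
    """
    n = len(coder1)
    assert n == len(coder2) and n > 0

    observed = sum(unit_overlap(a, b) for a, b in zip(coder1, coder2)) / n
    expected = sum(unit_overlap(a, b) for a in coder1 for b in coder2) / (n * n)

    if expected == 1.0:  # degenerate case: no room for agreement above chance
        return 1.0 if observed == 1.0 else 0.0
    return (observed - expected) / (1.0 - expected)


if __name__ == "__main__":
    # Hypothetical example: two coders label five photographs with one or
    # more content categories each (one-to-many coding).
    c1 = [{"beach"}, {"beach", "people"}, {"forest"}, {"people"}, {"beach", "forest"}]
    c2 = [{"beach"}, {"people"}, {"forest", "people"}, {"people"}, {"beach"}]
    print(round(fuzzy_kappa_sketch(c1, c2), 3))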

MeSH terms

  • Clinical Coding / methods*
  • Communication
  • Databases, Factual*
  • Humans

Grants and funding

The authors have no support or funding to report.