Inter-rater reliability of case-note audit: a systematic review

J Health Serv Res Policy. 2007 Jul;12(3):173-80. doi: 10.1258/135581907781543012.

Abstract

Objective: The quality of clinical care is often assessed by retrospective examination of case-notes (charts, medical records). Our objective was to determine the inter-rater reliability of case-note audit.

Methods: We conducted a systematic review of the inter-rater reliability of case-note audit. Analysis was restricted to 26 papers reporting comparisons of two or three raters making independent judgements about the quality of care.

Results: Sixty-six separate comparisons were possible, since some papers reported more than one measurement of reliability. Mean kappa values ranged from 0.32 to 0.70; these may be inflated by publication bias. Measured reliabilities were higher for case-note reviews based on explicit, as opposed to implicit, criteria, and for reviews that focused on outcome (including adverse effects) rather than process errors. We found an association between kappa and the prevalence of errors (poor-quality care), suggesting that alternatives such as tetrachoric and polychoric correlation coefficients should be considered for assessing inter-rater reliability.
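
To illustrate the kappa-prevalence association reported above, the minimal sketch below computes Cohen's kappa for two hypothetical 2x2 agreement tables with identical raw agreement but different prevalences of judged errors. The counts are invented for illustration and are not data from the review.

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa for a 2x2 agreement table.
    table[i][j] = number of cases rater 1 scored i and rater 2 scored j."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_observed = np.trace(table) / n        # proportion of exact agreement
    row = table.sum(axis=1) / n             # rater 1 marginal proportions
    col = table.sum(axis=0) / n             # rater 2 marginal proportions
    p_expected = (row * col).sum()          # agreement expected by chance
    return (p_observed - p_expected) / (1 - p_expected)

# Both tables show 90% raw agreement, but the prevalence of
# "error" judgements differs: balanced versus rare events.
balanced = [[45, 5], [5, 45]]   # errors judged present in ~50% of cases
rare     = [[85, 5], [5, 5]]    # errors judged present in ~10% of cases

print(cohens_kappa(balanced))   # ~0.80
print(cohens_kappa(rare))       # ~0.44
```

With identical 90% raw agreement, kappa falls from 0.80 to roughly 0.44 as errors become rare; this mechanical dependence on prevalence motivates the conclusion below.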

Conclusions: Comparative studies should take into account the relationship between kappa and the prevalence of the events being measured.
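
As one way of acting on this conclusion, the sketch below estimates the tetrachoric correlation mentioned in the Results by maximum likelihood under a latent bivariate-normal model. This is a generic illustration, not the review's own procedure; the function name and example counts are assumptions made for the sketch.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import minimize_scalar

def tetrachoric(table):
    """Maximum-likelihood tetrachoric correlation for a 2x2 table.
    Assumes each rater dichotomises a latent standard-normal trait;
    table[i][j] = count of cases rater 1 scored i and rater 2 scored j."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    # Thresholds implied by each rater's marginal rate of scoring 0.
    tau1 = norm.ppf(table[0].sum() / n)
    tau2 = norm.ppf(table[:, 0].sum() / n)

    def neg_log_lik(rho):
        cov = [[1.0, rho], [rho, 1.0]]
        p00 = multivariate_normal.cdf([tau1, tau2], mean=[0.0, 0.0], cov=cov)
        p01 = norm.cdf(tau1) - p00
        p10 = norm.cdf(tau2) - p00
        p11 = 1.0 - p00 - p01 - p10
        probs = np.clip([p00, p01, p10, p11], 1e-12, 1.0)
        counts = np.array([table[0, 0], table[0, 1], table[1, 0], table[1, 1]])
        return -np.sum(counts * np.log(probs))

    return minimize_scalar(neg_log_lik, bounds=(-0.999, 0.999),
                           method="bounded").x

# Same rare-event table as above: kappa is only ~0.44, but the
# estimated latent correlation between the raters is considerably higher.
print(tetrachoric([[85, 5], [5, 5]]))
```

Unlike kappa, the latent-correlation estimate is not mechanically depressed when one category is rare, though it buys this robustness at the cost of the bivariate-normal assumption.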

Publication types

  • Research Support, Non-U.S. Gov't
  • Review
  • Systematic Review

MeSH terms

  • Databases, Bibliographic
  • Decision Making
  • Endpoint Determination
  • Humans
  • Medical Audit / methods*
  • Observer Variation
  • Quality Assurance, Health Care / methods*
  • Reproducibility of Results*
  • Validation Studies as Topic