Achieving high inter-rater reliability in establishing data labels: a retrospective chart review study

BMJ Open Qual. 2024 Apr 17;13(2):e002722. doi: 10.1136/bmjoq-2023-002722.

Abstract

Background: In medical research, the effectiveness of machine learning algorithms depends heavily on the accuracy of labeled data. This study aimed to assess inter-rater reliability (IRR) in a retrospective electronic medical chart review conducted to create high-quality labeled data on comorbidities and adverse events (AEs).

Methods: Six registered nurses with diverse clinical backgrounds reviewed patient charts and extracted data on 20 predefined comorbidities and 18 AEs. All reviewers underwent four iterative rounds of training aimed at enhancing accuracy and fostering consensus. Periodic monitoring was conducted at the beginning, middle, and end of the testing phase to ensure data quality. Weighted Kappa coefficients were calculated with their associated 95% confidence intervals (CIs).
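The study reports Conger's Kappa, a multi-rater generalization of Cohen's Kappa. As a simplified illustration of the weighted-Kappa idea, the sketch below computes a linearly weighted Cohen's Kappa for two raters on hypothetical ordinal ratings; it is not the authors' exact multi-rater procedure.

```python
def weighted_kappa(r1, r2, categories):
    """Linearly weighted Cohen's kappa for two raters (illustrative sketch).

    r1, r2: equal-length lists of ratings; categories: ordered category list.
    Uses linear disagreement weights |i - j| / (k - 1).
    """
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)
    # Observed joint proportion matrix.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1.0 / n
    # Marginal proportions for each rater.
    p1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # Linear disagreement weights.
    w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    # Observed and chance-expected weighted disagreement.
    d_obs = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(w[i][j] * p1[i] * p2[j] for i in range(k) for j in range(k))
    return 1.0 - d_obs / d_exp

# Hypothetical ratings on a 3-level ordinal scale:
kappa = weighted_kappa([0, 1, 2, 1, 0], [0, 1, 2, 0, 0], [0, 1, 2])  # ≈ 0.762
```

In practice, the 95% CIs reported in the abstract could be obtained by bootstrap resampling of charts; that step is omitted here for brevity.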

Results: Seventy patient charts were reviewed. The overall agreement, measured by Conger's Kappa, was 0.80 (95% CI: 0.78-0.82). IRR scores remained consistently high (ranging from 0.70 to 0.87) throughout each phase.

Conclusion: Our study suggests that the detailed chart review manual and structured training regimen resulted in a consistently high level of agreement among reviewers throughout the chart review process. This establishes a robust foundation for generating high-quality labeled data, thereby enhancing the potential for developing accurate machine learning algorithms.

Keywords: Adverse events, epidemiology and detection; Chart review methodologies; Patient safety.

MeSH terms

  • Consensus
  • Data Accuracy*
  • Humans
  • Reproducibility of Results
  • Retrospective Studies