Intra-rater and inter-rater reliability of a medical record abstraction study on transition of care after childhood cancer

PLoS One. 2015 May 22;10(5):e0124290. doi: 10.1371/journal.pone.0124290. eCollection 2015.

Abstract

Background: The abstraction of data from medical records is a widespread practice in epidemiological research. However, studies using this means of data collection rarely report reliability. Within the Transition after Childhood Cancer Study (TaCC), which is based on medical record abstraction, we conducted a second independent abstraction of data with the aim of assessing a) the intra-rater reliability of one rater at two time points; b) possible learning effects between these two time points compared to a gold standard; and c) inter-rater reliability.

Method: Within the TaCC study we conducted a systematic medical record abstraction in the 9 Swiss clinics with pediatric oncology wards. In a second phase, we selected a subsample of medical records in 3 clinics for a second independent abstraction. We then assessed intra-rater reliability at two time points, the learning effect over time (comparing each rater at the two time points with a gold standard), and the inter-rater reliability of a selected number of variables. We calculated percentage agreement and Cohen's kappa.
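The two agreement measures named above can be sketched as follows. This is a minimal illustration, not the study's actual analysis code; the variable names (rater_a, rater_b) and the toy data are assumptions for demonstration only.

```python
# Illustrative sketch of percentage agreement and Cohen's kappa for two
# raters' codings, given as equal-length lists of categorical values.
# Names (rater_a, rater_b) are hypothetical, not from the study.
from collections import Counter


def percent_agreement(rater_a, rater_b):
    """Observed agreement: share of records coded identically by both raters."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)


def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement,
    kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(rater_a)
    p_o = percent_agreement(rater_a, rater_b)
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    # Expected chance agreement p_e from each rater's marginal frequencies.
    p_e = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)


# Toy example: two raters code 4 records, disagreeing on one.
rater_a = ["yes", "no", "yes", "yes"]
rater_b = ["yes", "no", "no", "yes"]
print(percent_agreement(rater_a, rater_b))  # 0.75
print(cohens_kappa(rater_a, rater_b))       # 0.5
```

Because kappa subtracts the agreement expected by chance, it is typically lower than raw percentage agreement, which is why studies commonly report both.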

Findings: For the assessment of intra-rater reliability we included 154 records (80 for rater 1; 74 for rater 2). For inter-rater reliability we included 70 records. Intra-rater reliability was substantial to excellent (Cohen's kappa 0.6-0.8), with an observed percentage agreement of 75%-95%. Learning effects were observed for all variables. Inter-rater reliability was substantial to excellent (Cohen's kappa 0.70-0.83), with high agreement ranging from 86% to 100%.

Conclusions: Our study showed that data abstracted from medical records are reliable. Investigating intra-rater and inter-rater reliability can give confidence in the conclusions drawn from the abstracted data and can increase data quality by minimizing systematic errors.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Child
  • Data Collection / standards
  • Humans
  • Medical Records*
  • Neoplasms*
  • Observer Variation
  • Reproducibility of Results

Grants and funding

This work was supported by the Swiss National Science Foundation (Ambizione grant PZ00P3_121682/1 and PZ00P3-141722 to GM); the Swiss Cancer League (grant KLS-01605-10-2004, KLS-2215-02-2008, KFS-02631-08-2010, KLS-02783-02-2011); Cancer League Bern; and Stiftung zur Krebsbekämpfung. The work of the Swiss Childhood Cancer Registry is supported by the Swiss Paediatric Oncology Group (www.spog.ch), Schweizerische Konferenz der kantonalen Gesundheitsdirektorinnen und -direktoren (www.gdk-cds.ch), Swiss Cancer Research (www.krebsforschung.ch), Kinderkrebshilfe Schweiz (www.kinderkrebshilfe.ch), Ernst-Göhner Stiftung, Stiftung Domarena, and National Institute of Cancer Epidemiology and Registration (www.nicer.ch). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.