Inter-rater reliability in the Paediatric Observation Priority Score (POPS)

Arch Dis Child. 2018 May;103(5):458-462. doi: 10.1136/archdischild-2017-314165. Epub 2018 Jan 12.

Abstract

Objective: The primary objective of this study was to determine the level of inter-rater reliability between nursing staff for the Paediatric Observation Priority Score (POPS).

Design: Retrospective observational study.

Setting: Single-centre paediatric emergency department.

Participants: 12 participants from a convenience sample of 21 nursing staff.

Interventions: Participants were shown video footage of three pre-recorded paediatric assessments and asked to record their own POPS for each child. The participants were blinded to the original, in-person POPS. Further data were gathered via a questionnaire to determine the level of training and experience each participant had with the POPS prior to undertaking this study.

Main outcome measures: Inter-rater reliability among participants' scoring of the POPS.

Results: Overall kappa value for case 1 was 0.74 (95% CI 0.605 to 0.865), case 2 was 1 (perfect agreement) and case 3 was 0.66 (95% CI 0.58 to 0.744).
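The abstract does not specify which kappa statistic was used; for a design like this, with many raters scoring the same case, Fleiss' kappa is a common choice. As an illustrative sketch (not the authors' method), the statistic can be computed from a subjects-by-categories matrix of rating counts:

```python
from typing import List

def fleiss_kappa(counts: List[List[int]]) -> float:
    """Fleiss' kappa for a subjects x categories matrix of rating counts.

    counts[i][j] = number of raters assigning subject i to category j;
    every subject must be rated by the same number of raters.
    """
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    # Mean per-subject agreement: proportion of agreeing rater pairs.
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ) / n_subjects
    # Chance agreement from the marginal category proportions.
    total = n_subjects * n_raters
    p_e = sum(
        (sum(row[j] for row in counts) / total) ** 2
        for j in range(len(counts[0]))
    )
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical data: 3 raters, 2 subjects, 2 score categories.
# Unanimous ratings give kappa = 1, the "perfect agreement" reported for case 2.
print(fleiss_kappa([[3, 0], [0, 3]]))  # → 1.0
```

Values of kappa between 0.61 and 0.80 are conventionally interpreted as substantial agreement, which is consistent with the conclusion drawn for cases 1 and 3.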

Conclusion: This study suggests good inter-rater reliability among nurses using the POPS to assess sick children in the emergency department.

Keywords: early warning score; emergency severity index; inter-rater reliability; nursing.

Publication types

  • Observational Study

MeSH terms

  • Child
  • Clinical Competence
  • Emergency Nursing / methods
  • Emergency Nursing / standards
  • Emergency Service, Hospital / standards*
  • England
  • Humans
  • Observer Variation
  • Pediatric Nursing / methods
  • Pediatric Nursing / standards*
  • Risk Assessment / methods
  • Single-Blind Method
  • Surveys and Questionnaires
  • Triage / methods
  • Triage / standards*
  • Video Recording