Reliability of a faculty evaluated scoring system for anesthesiology resident applicants (Original Investigation)

J Clin Anesth. 2016 Jun;31:131-6. doi: 10.1016/j.jclinane.2016.02.015. Epub 2016 Apr 15.

Abstract

Study objective: To assess the reliability and reproducibility of a recently instituted anesthesiology resident applicant interview scoring system at our institution.

Design: Retrospective evaluation of 2 years of interview data collected with a newly implemented scoring system and randomly assigned faculty interviewers.

Setting: Interview scoring evaluations were completed as standard practice in a large academic anesthesiology department.

Subjects: All anesthesiology resident applicants interviewed during the 2013/14 and 2014/15 seasons by a stable cohort of faculty interviewers. Data collection was blinded for both interviewers and interviewees.

Interventions: None for the purposes of this study; blinded data already collected as standard practice during the interview process were collated and analyzed.

Measurements: None specific to this study.

Main results: Inter-rater reliability among faculty was good for day-of-interview scoring and excellent for pre-interview application review.

Conclusions: A department-specific interview scoring system incorporating many elements beyond traditional standardized tests showed good to excellent reliability of faculty scoring for both the interview itself (including non-technical skills) and the application resume.
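The abstract does not state which reliability statistic was used. As a minimal illustrative sketch, assuming a two-way random-effects, single-rater intraclass correlation coefficient (ICC(2,1), Shrout & Fleiss) as the measure of inter-rater reliability, the Python snippet below shows how agreement between faculty raters on applicant interview scores could be quantified; the rating matrix is hypothetical.

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random-effects, single-rater agreement (Shrout & Fleiss).

    ratings: n_applicants x n_raters matrix of interview scores.
    """
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-applicant means
    col_means = ratings.mean(axis=0)   # per-rater means

    # Mean squares from the two-way ANOVA decomposition
    ms_rows = k * np.sum((row_means - grand_mean) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand_mean) ** 2) / (k - 1)
    ss_total = np.sum((ratings - grand_mean) ** 2)
    ss_error = ss_total - ms_rows * (n - 1) - ms_cols * (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Hypothetical scores: 5 applicants each rated by 3 randomly assigned faculty
scores = np.array([
    [8, 7, 8],
    [6, 6, 7],
    [9, 9, 8],
    [5, 6, 5],
    [7, 8, 8],
])
print(f"ICC(2,1) = {icc_2_1(scores):.2f}")  # prints about 0.82 for this toy matrix
```

Under common interpretation guidelines (e.g., Koo & Li), ICC values of roughly 0.75-0.90 are read as good reliability and values above 0.90 as excellent; the actual statistic and thresholds used by the study are not specified in this record.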

Keywords: Applicant; Faculty; Interview; Reliability; Reproducibility; Resident.

MeSH terms

  • Anesthesiology / education*
  • Clinical Competence / statistics & numerical data*
  • Educational Measurement / methods*
  • Educational Measurement / statistics & numerical data*
  • Faculty, Medical*
  • Humans
  • Internship and Residency*
  • Reproducibility of Results
  • Retrospective Studies