Study objective: To assess the reliability and reproducibility of a recently instituted anesthesiology resident applicant interview scoring system at our institution.
Design: Retrospective evaluation of 2 years of interview data from a newly implemented scoring system using randomly assigned faculty interviewers.
Setting: Interview scoring evaluations were completed as standard practice in a large academic anesthesiology department.
Subjects: All anesthesiology resident applicants interviewed during the 2013/14 and 2014/15 seasons by a stable cohort of faculty interviewers. Data collection was blinded for both interviewers and interviewees.
Interventions: None for the purposes of the study; blinded data already collected as standard practice during the interview process were collated and analyzed.
Measurements: None specific to the study.
Main results: Inter-rater reliability among faculty was good for day-of-interview scoring and excellent for pre-interview application review.
Conclusions: A department-specific interview scoring system incorporating many elements beyond traditional standardized tests showed good to excellent reliability of faculty scoring for both the interview itself (including nontechnical skills) and the application résumé.
Keywords: Applicant; Faculty; Interview; Reliability; Reproducibility; Resident.