Evaluating the Quality of Narrative Feedback for Entrustable Professional Activities in a Surgery Residency Program

Ann Surg. 2024 Apr 25. doi: 10.1097/SLA.0000000000006308. Online ahead of print.

Abstract

Objective: We assessed the quality of narrative feedback given to surgical residents during the first five years of Competency-Based Medical Education (CBME) implementation.

Summary background data: CBME requires ongoing formative assessments and feedback on learners' performance.

Methods: We conducted a retrospective cross-sectional study using assessments of Entrustable Professional Activities (EPAs) in the Surgical Foundations curriculum at Queen's University from 2017 to 2022. Two raters independently evaluated the quality of narrative feedback using the Quality of Assessment for Learning (QuAL) score (0-5).

Results: A total of 3,900 EPA assessments were completed over 5 years. Fifty-seven percent (2,229/3,900) of assessments had narrative feedback documented, with a mean QuAL score of 2.16±1.49. Of these, 1,614/2,229 (72.4%) provided evidence about the resident's performance, 951/2,229 (42.7%) provided suggestions for improvement, and 499/2,229 (22.4%) connected those suggestions to the evidence. There was no meaningful change in narrative feedback quality over time (r=0.067, P=0.002). Variables associated with lower-quality narrative feedback included: assessor role of attending (2.04±1.48) compared with medical student (3.13±1.12, P<0.001) and clinical fellow (2.47±1.54, P<0.001); concordant specialties between assessor and learner (2.06±1.50 vs. 2.21±1.49, P=0.025); completion of the assessment one month or more after the encounter versus within one week (1.85±1.48 vs. 2.23±1.49, P<0.001); and entrustment of the resident to perform the assessed EPA versus no entrustment (2.13±1.45 vs. 2.35±1.66, P=0.008). The quality of narrative feedback was similar for assessments completed under direct and indirect observation (2.18±1.47 vs. 2.06±1.54, P=0.153).

Conclusions: Just over half of the EPA assessments of surgery residents contained narrative feedback, and its overall quality was fair. The quality of feedback did not meaningfully change over the 5 years of CBME implementation. These findings highlight the need for faculty development and further research to improve the quality of narrative feedback.