Applying objective metrics to neurosurgical skill development with simulation and spaced repetition learning

J Neurosurg. 2023 Mar 10;139(4):1092-1100. doi: 10.3171/2023.1.JNS222651. Print 2023 Oct 1.

Abstract

Objective: Surgical skills laboratories augment educational training by deepening one's understanding of anatomy and allowing the safe practice of technical skills. Novel, high-fidelity, cadaver-free simulators provide an opportunity to increase access to skills laboratory training. The neurosurgical field has historically evaluated skill by subjective assessment or outcome measures, as opposed to process measures with objective, quantitative indicators of technical skill and progression. The authors conducted a pilot training module with spaced repetition learning concepts to evaluate its feasibility and impact on proficiency.

Methods: The 6-week module used a simulator of a pterional approach representing skull, dura mater, cranial nerves, and arteries (UpSurgeOn S.r.l.). Neurosurgery residents at an academic tertiary hospital completed a video-recorded baseline examination, performing supraorbital and pterional craniotomies, dural opening, suturing, and anatomical identification under a microscope. Participation in the full 6-week module was voluntary, which precluded randomizing by class year. The intervention group participated in four additional faculty-guided trainings. In the 6th week, all residents (intervention and control) repeated the initial examination with video recording. Videos were evaluated by three neurosurgical attendings who were not affiliated with the institution and who were blinded to participant grouping and year. Scores were assigned via Global Rating Scales (GRSs) and Task-based Specific Checklists (TSCs) previously built for craniotomy (cGRS, cTSC) and microsurgical exploration (mGRS, mTSC).
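In a design like this, each recorded examination typically receives a cGRS, cTSC, mGRS, and mTSC score from each of the three blinded raters, which are then averaged per participant before group comparison. The sketch below illustrates one way that aggregation might be organized; the data values, structure, and the function name `mean_scores` are illustrative assumptions, not part of the published protocol.

```python
from statistics import mean

# Hypothetical per-video scores from three blinded raters.
# Keys are the four instruments named in the study:
# cGRS (/16), cTSC (/10), mGRS, mTSC.
ratings = {
    "cGRS": [11, 12, 10],   # one entry per rater (invented values)
    "cTSC": [4, 5, 3],
    "mGRS": [20, 22, 21],
    "mTSC": [6, 7, 6],
}

def mean_scores(ratings: dict[str, list[float]]) -> dict[str, float]:
    """Average each instrument across raters for a single video."""
    return {instrument: mean(scores) for instrument, scores in ratings.items()}

print(mean_scores(ratings))
# e.g. {'cGRS': 11.0, 'cTSC': 4.0, 'mGRS': 21.0, 'mTSC': 6.33...}
```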

Results: Fifteen residents participated (8 intervention, 7 control). The intervention group included a greater proportion of junior residents (postgraduate years 1-3; 7/8) than the control group (1/7). The three external evaluators showed internal consistency within 0.5% (kappa, probability > Z of 0.00001). Total average time improved by 5:42 minutes (p < 0.003; intervention, 6:05, p = 0.07; control, 5:15, p = 0.001). The intervention group began with lower scores in all categories and surpassed the control group in cGRS (10.93 to 13.6/16) and cTSC (4.0 to 7.4/10). Percent improvements for the intervention group were cGRS 25% (p = 0.02), cTSC 84% (p = 0.002), mGRS 18% (p = 0.003), and mTSC 52% (p = 0.037). For controls, improvements were cGRS 4% (p = 0.19), cTSC 0.0% (p > 0.99), mGRS 6% (p = 0.07), and mTSC 31% (p = 0.029).
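The percent improvements and p values above reflect paired pre/post comparisons within each group. As a hedged illustration only, since the abstract does not name the statistical test used, the sketch below computes a group's percent improvement in mean score and a Wilcoxon signed-rank test on paired baseline/week-6 scores; the score values are invented.

```python
from statistics import mean

from scipy.stats import wilcoxon

# Invented paired cGRS scores (baseline exam vs. week-6 exam) for one group.
baseline = [10, 11, 12, 10, 11, 12, 10, 11]
final =    [13, 14, 13, 13, 14, 14, 13, 13]

# Percent improvement of the group mean, as reported in the abstract.
pct_improvement = (mean(final) - mean(baseline)) / mean(baseline) * 100

# Paired nonparametric test; the study's actual test is not stated in the
# abstract, so Wilcoxon signed-rank is an assumption made for illustration.
stat, p_value = wilcoxon(baseline, final)

print(f"improvement: {pct_improvement:.1f}%  p = {p_value:.3f}")
```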

Conclusions: Participants who underwent the 6-week simulation course showed significant objective improvement in technical indicators, particularly individuals who were early in their training. The small, nonrandomized groups limit conclusions about the degree of impact; however, introducing objective performance metrics during spaced repetition simulation is likely to improve training. A larger, multi-institutional randomized controlled study will help elucidate the value of this educational method.

Keywords: education; neurosurgery; simulation; spaced repetition; surgical training; technical skills.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Clinical Competence
  • Craniotomy
  • Curriculum
  • Humans
  • Internship and Residency*
  • Neurosurgical Procedures / methods
  • Simulation Training* / methods
  • Video Recording