Objective evaluation of laparoscopic surgical skills in wet lab training based on motion analysis and machine learning

Langenbecks Arch Surg. 2022 Aug;407(5):2123-2132. doi: 10.1007/s00423-022-02505-9. Epub 2022 Apr 8.

Abstract

Background: Our aim was to build a skill assessment system that provides objective feedback to trainees based on the motion metrics of laparoscopic surgical instruments.

Methods: Participants performed tissue dissection around the aorta (tissue dissection task) and renal parenchymal closure (parenchymal-suturing task), using swine organs in a box trainer under a motion capture (Mocap) system. Two experts assessed the recorded videos according to the Global Operative Assessment of Laparoscopic Skills (GOALS) rating scale (score range, 5-25), and the mean scores were used as the target variables in the regression analyses. The correlations between the mean GOALS scores and the Mocap metrics were evaluated, and candidate Mocap metrics with a Spearman's rank correlation coefficient exceeding 0.4 were selected for the estimation of each GOALS item. Four regression algorithms were used for automatic GOALS estimation: support vector regression (SVR), principal component analysis (PCA)-SVR, ridge regression, and partial least squares regression. Model validation was performed by nested and repeated k-fold cross-validation, and the mean absolute error (MAE) was calculated to evaluate the accuracy of each regression model.
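
The abstract does not include an implementation, but the described workflow (Spearman-based metric selection with a 0.4 cutoff, four regression algorithms, and nested k-fold cross-validation scored by MAE) can be sketched roughly as follows. This is a minimal illustration using scikit-learn and SciPy with synthetic data; the library choice, variable names, hyperparameter grids, and the use of the absolute correlation value are assumptions, not details from the paper.

```python
# Minimal sketch of the described pipeline: feature selection by Spearman's
# rank correlation, followed by GOALS-score regression with nested k-fold
# cross-validation. Library choice, synthetic data, variable names, and
# hyperparameter grids are illustrative assumptions.
import numpy as np
from scipy.stats import spearmanr
from sklearn.svm import SVR
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import KFold, GridSearchCV, cross_val_score

def select_metrics(X, y, threshold=0.4):
    """Keep Mocap metrics whose Spearman's rho with the mean GOALS score
    exceeds the threshold (absolute value used here as an assumption)."""
    keep = [j for j in range(X.shape[1])
            if abs(spearmanr(X[:, j], y)[0]) > threshold]
    return X[:, keep], keep

# Synthetic stand-in data: rows are trainees, columns are Mocap metrics.
rng = np.random.default_rng(0)
X = rng.normal(size=(70, 12))
y = 15 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=70)  # mean GOALS scores
X_sel, selected = select_metrics(X, y)

# The four regression algorithms compared in the study.
models = {
    "SVR": (make_pipeline(StandardScaler(), SVR()),
            {"svr__C": [0.1, 1, 10]}),
    "PCA-SVR": (make_pipeline(StandardScaler(), PCA(), SVR()),
                {"pca__n_components": [1, 2], "svr__C": [0.1, 1, 10]}),
    "Ridge": (make_pipeline(StandardScaler(), Ridge()),
              {"ridge__alpha": [0.1, 1, 10]}),
    "PLS": (make_pipeline(StandardScaler(), PLSRegression()),
            {"plsregression__n_components": [1, 2]}),
}

# Nested CV: the inner loop tunes hyperparameters, the outer loop estimates MAE.
inner_cv = KFold(n_splits=5, shuffle=True, random_state=0)
outer_cv = KFold(n_splits=5, shuffle=True, random_state=1)
for name, (pipeline, grid) in models.items():
    search = GridSearchCV(pipeline, grid, cv=inner_cv,
                          scoring="neg_mean_absolute_error")
    scores = cross_val_score(search, X_sel, y, cv=outer_cv,
                             scoring="neg_mean_absolute_error")
    print(f"{name}: MAE = {-scores.mean():.2f}")
```

Keeping hyperparameter tuning inside the inner loop and error estimation in the outer loop is what makes the resulting MAE a fair measure of how well each model generalizes, rather than an optimistic resubstitution estimate.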

Results: Forty-five urologic surgeons, 9 gastroenterological surgeons, 3 gynecologic surgeons, 4 junior residents, and 9 medical students participated in the training. In both tasks, the speed-related parameters (e.g., velocity, velocity range, acceleration, jerk) correlated positively with the mean GOALS scores, whereas the efficiency-related parameters (e.g., task time, path length, number of opening/closing operations) correlated negatively with them. Among the 4 algorithms, SVR achieved the highest accuracy (lowest MAE) in the tissue dissection task and PCA-SVR in the parenchymal-suturing task, based on 100 iterations of the validation process for automatic GOALS estimation.
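
The comparison above rests on repeating the outer cross-validation loop and averaging the MAE. A standalone sketch of that repeated, nested validation, here with scikit-learn's RepeatedKFold, a single SVR pipeline, and synthetic data standing in for the selected Mocap metrics, might look like the following; the fold counts, grid, and data are illustrative assumptions.

```python
# Standalone sketch of the repeated validation: the outer k-fold loop of the
# nested CV is repeated (100 iterations in the study) and the MAE is averaged.
# Data, fold counts, and the hyperparameter grid are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import (KFold, RepeatedKFold, GridSearchCV,
                                     cross_val_score)

rng = np.random.default_rng(1)
X = rng.normal(size=(70, 6))                                # selected Mocap metrics
y = 15 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=70)    # mean GOALS scores

inner_cv = KFold(n_splits=5, shuffle=True, random_state=0)
outer_cv = RepeatedKFold(n_splits=5, n_repeats=100, random_state=0)

search = GridSearchCV(make_pipeline(StandardScaler(), SVR()),
                      {"svr__C": [0.1, 1, 10]},
                      cv=inner_cv, scoring="neg_mean_absolute_error")
scores = cross_val_score(search, X, y, cv=outer_cv,
                         scoring="neg_mean_absolute_error")
print(f"Mean MAE over {outer_cv.get_n_splits()} outer folds: "
      f"{-scores.mean():.2f} +/- {scores.std():.2f}")
```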

Conclusion: We developed a machine learning-based GOALS scoring system for wet lab training, with an error of approximately 1-2 points in the total score and motion metrics that are explainable to trainees. Our future challenges are further improving onsite GOALS feedback, exploring the educational benefit of our model, and building an efficient training program.

Keywords: Laparoscopic surgery; Machine learning; Motion capture; Simulation training; Surgical education.

MeSH terms

  • Animals
  • Clinical Competence
  • Female
  • Humans
  • Internship and Residency*
  • Laparoscopy* / education
  • Machine Learning
  • Simulation Training*
  • Surgeons*
  • Swine