Objective: To develop an algorithm for the objective evaluation of surgeon distraction during robot-assisted surgery (RAS).
Materials and methods: Electroencephalograms (EEG) were recorded from 22 medical students while they performed five key tasks on a robotic surgical simulator: Instrument Control, Ball Placement, Spatial Control II, Fourth Arm Tissue Retraction, and Hands-on Surgical Training Tasks. All students completed the Surgery Task Load Index (SURG-TLX), which includes one domain for subjective assessment of distraction (scale: 1-20). Scores were divided into low (scores 1-6, label 1), intermediate (scores 7-12, label 2), and high distraction (scores 13-20, label 3); these cut-off values were chosen empirically, based on verbal assessments by participants and experienced surgeons. A deep convolutional neural network (CNN) was trained on the students' EEG recordings and used to classify their distraction levels. Accuracy was determined by comparing the subjective SURG-TLX distraction labels with the output of the proposed classification algorithm. Pearson correlation was used to assess the relationship between performance scores (generated by the simulator) and subjective distraction scores.
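The three-way binning of SURG-TLX distraction scores described above can be sketched as follows (a minimal illustration; the function name and structure are my own and do not appear in the study):

```python
def distraction_label(surg_tlx_score: int) -> int:
    """Map a SURG-TLX distraction score (1-20) to the study's three bins:
    1 = low (scores 1-6), 2 = intermediate (7-12), 3 = high (13-20).
    Hypothetical helper for illustration only."""
    if not 1 <= surg_tlx_score <= 20:
        raise ValueError("SURG-TLX distraction scores range from 1 to 20")
    if surg_tlx_score <= 6:
        return 1
    if surg_tlx_score <= 12:
        return 2
    return 3
```

These labels serve as the classification targets for the CNN trained on the EEG recordings.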
Results: The proposed end-to-end model classified distraction into low, intermediate, and high with 94%, 89%, and 95% accuracy, respectively. We found a significant negative correlation (r = -0.21; p = 0.003) between performance and SURG-TLX distraction scores.
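The reported performance-distraction relationship uses the standard sample Pearson coefficient, which can be computed with the Python standard library alone; the data below are invented for illustration and match the study only in the sign of the correlation:

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative data only: higher distraction co-occurring with lower performance
# yields a negative r, matching the direction of the reported r = -0.21.
distraction = [3, 5, 8, 12, 15, 18]
performance = [92, 88, 85, 80, 74, 70]
r = pearson_r(distraction, performance)  # r < 0
```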
Conclusions: To our knowledge, this is the first objective method to assess and quantify distraction while performing robotic surgical tasks on a simulator, and its use may ultimately improve patient safety. Validation in the clinical setting is required.
Keywords: Deep learning; Distraction; Electroencephalogram (EEG); Patient safety; Robot-assisted surgery.
Copyright © 2021 Elsevier B.V. All rights reserved.