Machine and deep learning for workflow recognition during surgery

Minim Invasive Ther Allied Technol. 2019 Apr;28(2):82-90. doi: 10.1080/13645706.2019.1584116. Epub 2019 Mar 8.

Abstract

Recent years have seen tremendous progress in artificial intelligence (AI), such as the automatic, real-time recognition of objects and activities in videos in the field of computer vision. Due to its increasing digitalization, the operating room (OR) stands to benefit directly from this progress in the form of new assistance tools that can enhance the abilities and performance of surgical teams. Key to such tools is the recognition of the surgical workflow, because efficient assistance by an AI system requires the system to be aware of the surgical context, that is, of all activities taking place inside the OR. We present here how several recent techniques relying on machine and deep learning can be used to analyze the activities taking place during surgery, using videos captured from either endoscopic or ceiling-mounted cameras. We also present two potential clinical applications that we are developing at the University of Strasbourg with our clinical partners.

Keywords: Activity recognition; RGB-D video; clinician pose estimation; endoscopic video; operating room; surgical control tower; surgical workflow; tool detection.

Publication types

  • Review

MeSH terms

  • Algorithms
  • Artificial Intelligence
  • Deep Learning
  • Humans
  • Inventions
  • Machine Learning
  • Surgical Procedures, Operative*
  • Task Performance and Analysis*
  • Workflow*