Self-supervised learning via cluster distance prediction for operating room context awareness

Int J Comput Assist Radiol Surg. 2022 Aug;17(8):1469-1476. doi: 10.1007/s11548-022-02629-9. Epub 2022 Apr 26.

Abstract

Purpose: Semantic segmentation and activity classification are key components of intelligent surgical systems that can understand and assist clinical workflow. In the operating room (OR), semantic segmentation is at the core of making robots aware of their clinical surroundings, whereas activity classification aims at understanding OR workflow at a higher level. State-of-the-art semantic segmentation and activity recognition approaches are fully supervised, which does not scale. Self-supervision can decrease the amount of annotated data needed.

Methods: We propose a new 3D self-supervised task for OR scene understanding, using OR scene images captured with time-of-flight (ToF) cameras. Unlike other self-supervised approaches, whose handcrafted pretext tasks focus on 2D image features, our proposed task predicts the relative 3D distance between image patches by exploiting depth maps. By learning 3D spatial context, the model produces discriminative features for our downstream tasks.
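The abstract does not give implementation details, but the pretext task can be sketched roughly as follows: back-project each image patch into 3D using its depth values and camera intrinsics, compute the Euclidean distance between patch centroids, and quantize that distance into bins to obtain a classification target (the "cluster distance" the title refers to). All function names, the patch parameterization, and the intrinsics below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def backproject(depth, u, v, fx=1.0, fy=1.0, cx=0.0, cy=0.0):
    """Back-project pixel (u, v) into camera-space 3D using the pinhole model.
    Intrinsics (fx, fy, cx, cy) are placeholders, not values from the paper."""
    z = depth[v, u]
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def patch_centroid_3d(depth, top, left, size, **intr):
    """3D centroid of a square patch: mean of its back-projected pixels."""
    pts = [backproject(depth, u, v, **intr)
           for v in range(top, top + size)
           for u in range(left, left + size)]
    return np.mean(pts, axis=0)

def relative_patch_distance(depth, patch_a, patch_b, size=1, **intr):
    """Euclidean distance between the 3D centroids of two patches
    given as (top, left) corners."""
    ca = patch_centroid_3d(depth, *patch_a, size, **intr)
    cb = patch_centroid_3d(depth, *patch_b, size, **intr)
    return float(np.linalg.norm(ca - cb))

def distance_to_class(d, bin_edges):
    """Quantize a continuous distance into a bin index, turning the
    pretext task into a classification problem (assumed formulation)."""
    return int(np.digitize(d, bin_edges))

# Toy example: flat depth map at z = 1, identity-like intrinsics.
depth = np.ones((10, 10))
d = relative_patch_distance(depth, (0, 0), (4, 3))   # centroids (0,0,1) and (3,4,1)
label = distance_to_class(d, bin_edges=[1.0, 4.0, 8.0])
```

A network pretrained to predict `label` from the two RGB (or depth) patches would have to learn 3D spatial layout, which is the intuition the Methods section describes; the actual loss, patch sampling, and binning scheme in the paper may differ.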

Results: Our approach is evaluated on two tasks and datasets containing multiview data captured in clinical scenarios. We demonstrate a noteworthy improvement in performance on both tasks, particularly in low-data regimes, where self-supervised learning is most useful.

Conclusion: We propose a novel privacy-preserving self-supervised approach that exploits depth maps. Our method performs on par with other self-supervised approaches and offers a promising way to alleviate the burden of full supervision.

Keywords: Activity classification; OR scene understanding; Self-supervision; Semantic segmentation; da Vinci surgical system.

MeSH terms

  • Humans
  • Operating Rooms*
  • Supervised Machine Learning*