Repetitive motion compensation for real-time intraoperative video processing

Med Image Anal. 2019 Apr;53:1-10. doi: 10.1016/j.media.2018.12.005. Epub 2019 Jan 4.

Abstract

In this paper, we present a motion compensation algorithm dedicated to video processing during neurosurgery. After craniotomy, the brain surface undergoes a repetitive motion due to the cardiac pulsation. This motion, as well as potential camera motion, prevents accurate video analysis. We propose a dedicated motion model in which the brain deformation is described using a linear basis learned from a few initial frames of the video. As opposed to other works using a linear basis for the flow, the camera motion is explicitly accounted for in the transformation model. Despite the nonlinear nature of our model, all the motion parameters are robustly estimated at once, using only one singular value decomposition (SVD), which makes our procedure computationally efficient. A Lagrangian specification of the flow field ensures the stability of the method. Experiments on in vivo data are presented to evaluate the capacity of the method to cope with occlusions and camera motion. The proposed method satisfies the intraoperative constraints: it is robust to occlusions by surgical tools, it works in real time, and it handles large changes of camera viewpoint.
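As a rough illustration of the subspace-learning step described above (not the authors' implementation), the sketch below builds a low-rank basis for the repetitive brain-surface deformation by applying a single SVD to dense flow fields estimated on a few initial frames, and then expresses a new flow as coefficients on that basis. The array shapes, the rank r, and the function names are illustrative assumptions.

    # Minimal sketch, assuming dense (u, v) flow fields from the first K frames
    # are already available from any optical-flow method. All names and shapes
    # are hypothetical; this is not the paper's code.
    import numpy as np

    def learn_flow_basis(training_flows, r=3):
        """training_flows: (K, H, W, 2) array of dense flow fields.
        Returns an (r, H, W, 2) orthonormal basis spanning the observed deformations."""
        K, H, W, _ = training_flows.shape
        X = training_flows.reshape(K, -1)           # one flattened flow per row
        X = X - X.mean(axis=0, keepdims=True)       # remove the mean deformation
        # Thin SVD of the (K x 2HW) data matrix; right singular vectors give the basis.
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        return Vt[:r].reshape(r, H, W, 2)

    def project_flow(flow, basis):
        """Coefficients of a new (H, W, 2) flow field on the learned orthonormal basis."""
        B = basis.reshape(basis.shape[0], -1)       # (r, 2HW)
        return B @ flow.reshape(-1)                 # (r,) coefficient vector

    # Example with synthetic data:
    flows = np.random.randn(20, 64, 64, 2).astype(np.float32)
    basis = learn_flow_basis(flows, r=3)
    coeffs = project_flow(flows[0] - flows.mean(axis=0), basis)

In the paper, the camera motion is additionally modeled explicitly rather than absorbed into such a basis, and all parameters are estimated jointly; the sketch only shows the deformation-subspace part.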

Keywords: Brain surgery; Extended direct linear transform; Image registration; Motion compensation; Real time video processing; Subspace learning.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms*
  • Brain / diagnostic imaging*
  • Brain / surgery*
  • Humans
  • Image Processing, Computer-Assisted / methods*
  • Motion
  • Neurosurgical Procedures*
  • Video Recording*