A learning-based method for online adjustment of C-arm Cone-beam CT source trajectories for artifact avoidance

Int J Comput Assist Radiol Surg. 2020 Nov;15(11):1787-1796. doi: 10.1007/s11548-020-02249-1. Epub 2020 Aug 25.

Abstract

Purpose: During spinal fusion surgery, screws are placed close to critical nerves, which demands highly accurate screw placement. Verifying screw placement on high-quality tomographic imaging is essential. C-arm cone-beam CT (CBCT) provides intraoperative 3D tomographic imaging that would allow for immediate verification and, if needed, revision. However, the reconstruction quality attainable with commercial CBCT devices is insufficient, predominantly due to severe metal artifacts in the presence of pedicle screws. These artifacts arise from a mismatch between the true physics of image formation and an idealized model thereof assumed during reconstruction. Prospectively acquiring views of the anatomy that are least affected by this mismatch can therefore improve reconstruction quality.

Methods: We propose to adjust the C-arm CBCT source trajectory during the scan to optimize reconstruction quality with respect to a specific task, here the verification of screw placement. Adjustments are performed on the fly using a convolutional neural network that, given the current X-ray image, regresses a quality index over all possible next views. Adjusting the CBCT trajectory to acquire the recommended views results in non-circular source orbits that avoid poor views and, thus, data inconsistencies.
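To make the described pipeline concrete, the sketch below illustrates one plausible realization of the Methods paragraph: a small CNN maps the current projection image to a quality index for each candidate next view, and a greedy step selects the recommended view. The network architecture, the fixed discretization of candidate views, and all names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of on-the-fly view-quality regression and greedy
# trajectory adjustment. Architecture, view discretization, and all names
# are assumptions for illustration only.
import torch
import torch.nn as nn

N_CANDIDATE_VIEWS = 36  # assumed discretization of possible next source positions

class ViewQualityNet(nn.Module):
    """CNN mapping the current X-ray projection to a quality index for
    every candidate next view (higher = less expected data inconsistency)."""
    def __init__(self, n_views: int = N_CANDIDATE_VIEWS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_views)  # one quality score per candidate view

    def forward(self, projection: torch.Tensor) -> torch.Tensor:
        x = self.features(projection).flatten(1)
        return self.head(x)

def next_view(model: nn.Module, projection: torch.Tensor) -> int:
    """Greedy adjustment: recommend the candidate view with the highest
    predicted quality index given the current projection."""
    with torch.no_grad():
        scores = model(projection)            # shape: (1, N_CANDIDATE_VIEWS)
    return int(scores.argmax(dim=1).item())   # index of the recommended next view

# Example: a single 1-channel 256x256 projection drives the recommendation.
model = ViewQualityNet()
current_projection = torch.rand(1, 1, 256, 256)
print("recommended next view index:", next_view(model, current_projection))
```

In such a setup the C-arm would be steered to the recommended view, the newly acquired projection would be fed back into the network, and the process would repeat until the scan is complete, yielding the non-circular, scene-specific orbit described above.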

Results: We demonstrate that convolutional neural networks trained on realistically simulated data are capable of predicting quality metrics that enable scene-specific adjustments of the CBCT source trajectory. Using both realistically simulated data and real CBCT acquisitions of a semi-anthropomorphic phantom, we show that tomographic reconstructions from the resulting scene-specific CBCT acquisitions exhibit improved image quality, particularly with respect to metal artifacts.

Conclusion: The proposed method is a step toward online patient-specific C-arm CBCT source trajectories that enable high-quality tomographic imaging in the operating room. Since the optimization objective is implicitly encoded in a neural network trained on large amounts of well-annotated projection images, the proposed approach obviates the need for 3D information at run time.

Keywords: Deep learning; Image-guided surgery; Metal artifact reduction; Tomographic reconstruction.

MeSH terms

  • Artifacts
  • Cone-Beam Computed Tomography / methods*
  • Humans
  • Image Processing, Computer-Assisted / methods*
  • Neural Networks, Computer*
  • Pedicle Screws
  • Phantoms, Imaging
  • Spinal Fusion / methods*
  • Spine / diagnostic imaging
  • Spine / surgery*
  • Surgery, Computer-Assisted / methods*