[Implementation of mixed reality navigation based on multimodal imaging in the resection of intracranial lesions in eloquent brain areas]

Zhonghua Wai Ke Za Zhi. 2022 Dec 1;60(12):1100-1107. doi: 10.3760/cma.j.cn112139-20220531-00248.
[Article in Chinese]

Abstract

Objective: To examine the clinical feasibility of mixed reality navigation (MRN) technology based on multimodal imaging for the resection of intracranial lesions in eloquent brain areas. Methods: Fifteen patients with intracranial lesions in eloquent areas admitted to the Department of Neurosurgery, the First Medical Center, People's Liberation Army General Hospital from September 2020 to September 2021 were retrospectively enrolled. There were 7 males and 8 females, aged (50±16) years (range: 16 to 70 years). Postoperative pathological diagnoses included meningioma (n=7), metastatic carcinoma (n=3), and cavernous hemangioma, glioma, ependymoma, aneurysmal change, and lymphoma (n=1 each). Open-source software was used for three-dimensional visualization of the preoperative images, and a self-developed MRN system was used for fusion of and interaction with the multimodal images, so as to formulate the surgical plan and avoid damage to the eloquent white matter fiber tracts around the lesion. Traditional navigation, intraoperative ultrasound, and fluorescein sodium angiography were used to determine the extent of lesion resection. The intraoperative course of MRN-assisted surgery was analyzed, the setup time and localization error of the MRN system were measured, and postoperative changes in neurological function were recorded. Results: MRN based on multimodal imaging was achieved in all 15 patients. The MRN system setup time (M(IQR)) was 36 (12) minutes (range: 20 to 44 minutes), and the localization error was 3.2 (2.0) mm (range: 2.6 to 6.7 mm). The reliability of eloquent white matter fiber tract localization based on MRN was rated as "excellent" in 11 cases, "moderate" in 3 cases, and "poor" in 1 case. There were no perioperative deaths and no new postoperative impairments of motor, language, or visual function. One patient experienced transient limb numbness after operation, which resolved to the preoperative state within 2 weeks. Conclusion: The MRN system based on multimodal imaging can improve surgical accuracy and safety and reduce the incidence of iatrogenic neurological dysfunction.
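For illustration only: the abstract summarizes the localization error as a median (IQR) in millimetres but does not describe how it was measured. The Python sketch below shows one plausible way such a figure could be computed, assuming paired landmark coordinates taken from the MRN overlay and from a reference measurement; the function name and all coordinate values are hypothetical and are not taken from the study.

    # Illustrative sketch only: one plausible way to summarize MRN localization
    # error, assuming paired landmark positions (in mm) from the navigation
    # overlay and a ground-truth reference. Not the authors' implementation.
    import numpy as np

    def localization_error_mm(overlay_xyz, reference_xyz):
        """Per-landmark Euclidean distance (mm) between overlay and reference positions."""
        overlay_xyz = np.asarray(overlay_xyz, dtype=float)
        reference_xyz = np.asarray(reference_xyz, dtype=float)
        return np.linalg.norm(overlay_xyz - reference_xyz, axis=1)

    # Hypothetical landmark coordinates (mm) in the patient coordinate frame.
    overlay = [[10.2, -31.5, 44.0], [25.1, -12.8, 50.3], [-8.7, -40.2, 38.9]]
    reference = [[10.9, -30.1, 42.1], [26.0, -11.5, 48.8], [-7.5, -38.9, 37.2]]

    errors = localization_error_mm(overlay, reference)
    q75, q25 = np.percentile(errors, [75, 25])
    print(f"median error: {np.median(errors):.1f} mm, IQR: {q75 - q25:.1f} mm")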

Publication types

  • English Abstract

MeSH terms

  • Augmented Reality*
  • Humans
  • Multimodal Imaging
  • Reproducibility of Results
  • Retrospective Studies