Development and validation of a multi-modality fusion deep learning model for differentiating glioblastoma from solitary brain metastases

Zhong Nan Da Xue Xue Bao Yi Xue Ban. 2024 Jan 28;49(1):58-67. doi: 10.11817/j.issn.1672-7347.2024.230248.
[Article in English, Chinese]

Abstract

Objectives: Glioblastoma (GBM) and brain metastases (BMs) are the two most common malignant brain tumors in adults. Magnetic resonance imaging (MRI) is a commonly used method for screening brain tumors and evaluating their prognosis, but conventional MRI sequences have limited sensitivity and specificity for the differential diagnosis of GBM and BMs. In recent years, deep neural networks have shown great potential for diagnostic classification and for building clinical decision support systems. This study aims to apply radiomics features extracted with deep learning techniques to explore the feasibility of accurate preoperative classification of newly diagnosed GBM and solitary brain metastases (SBMs), and to further explore the impact of multi-modality data fusion on this classification task.

Methods: Standard-protocol cranial MRI data from 135 newly diagnosed GBM patients and 73 SBMs patients, confirmed by histopathology or clinical diagnosis, were retrospectively analyzed. First, structural T1-weighted, contrast-enhanced T1-weighted (T1C), and T2-weighted images were selected as the 3 inputs to the model. Regions of interest (ROIs) were manually delineated on the registered images of the three modalities, multi-modality radiomics features were extracted, dimensionality was reduced with a random forest (RF)-based feature selection method, and the importance of each feature was further analyzed. Second, a contrastive disentanglement method was used to separate the shared and complementary features across the different modalities. Finally, the response of each sample to GBM and SBMs was predicted by fusing these 2 kinds of features from the different modalities.
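The RF-based feature selection step described above can be sketched as follows. This is an illustrative example on synthetic data, not the authors' code: the patient count matches the study (208), but the feature dimensions, the mean-importance threshold, and the toy labels are all assumptions.

```python
# Illustrative sketch (assumptions throughout): random-forest-based feature
# selection over concatenated multi-modality radiomics features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for radiomics features from 3 MR modalities
# (T1, T1C, T2), concatenated per patient; 208 patients as in the study.
n_patients, n_feats_per_modality = 208, 50
X = rng.normal(size=(n_patients, 3 * n_feats_per_modality))
y = rng.integers(0, 2, size=n_patients)  # toy labels: 0 = SBM, 1 = GBM
# Shift a handful of features by class so importances are meaningful.
X[:, :5] += y[:, None] * 1.5

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Keep features whose impurity-based importance exceeds the mean importance
# (the threshold is an assumption; the paper does not specify one).
importances = rf.feature_importances_
selected = np.where(importances > importances.mean())[0]
X_reduced = X[:, selected]
print(X_reduced.shape[1], "features retained of", X.shape[1])
```

The `feature_importances_` attribute also supports the importance analysis mentioned above, by ranking the retained features.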

Results: Radiomics features combined with machine learning and with the proposed multi-modal fusion method discriminated well between GBM and SBMs. Compared with single-modality data, multi-modal fusion models based on machine learning algorithms such as support vector machine (SVM), logistic regression, RF, adaptive boosting (AdaBoost), and gradient boosting decision tree (GBDT) achieved significant improvements, with area under the curve (AUC) values of 0.974, 0.978, 0.943, 0.938, and 0.947, respectively. The proposed contrastive disentanglement multi-modal MR fusion method performed even better: on the test set, its AUC, accuracy (ACC), sensitivity (SEN), and specificity (SPE) were 0.985, 0.984, 0.900, and 0.990, respectively. Compared with other multi-modal fusion methods, it achieved the best AUC, ACC, and SEN. In the ablation experiment verifying the contribution of each module, using the 3 loss functions simultaneously increased AUC, ACC, and SEN by 1.6%, 10.9%, and 15.0%, respectively.
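An evaluation loop over the five baseline classifiers named above, reporting the same four metrics (AUC, ACC, SEN, SPE), can be sketched as below. The data are synthetic and the hyperparameters are defaults, so the numbers will not match the paper; only the structure of the comparison is illustrated.

```python
# Illustrative sketch (not the authors' pipeline): compare SVM, logistic
# regression, RF, AdaBoost, and GBDT on toy "fused" features.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for fused multi-modality features (208 patients).
X, y = make_classification(n_samples=208, n_features=30, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

models = {
    "SVM": SVC(probability=True, random_state=0),
    "Logistic": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "GBDT": GradientBoostingClassifier(random_state=0),
}

results = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)[:, 1]
    pred = (prob >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    results[name] = {
        "AUC": roc_auc_score(y_te, prob),
        "ACC": accuracy_score(y_te, pred),
        "SEN": tp / (tp + fn),  # sensitivity: recall on the positive class
        "SPE": tn / (tn + fp),  # specificity: recall on the negative class
    }

for name, m in results.items():
    print(name + ": " + ", ".join(f"{k}={v:.3f}" for k, v in m.items()))
```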

Conclusions: A deep learning-based contrastive disentanglement multi-modal MR radiomics feature fusion technique helps improve the accuracy of GBM and SBMs classification.
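The disentanglement idea in the Methods, splitting each modality's representation into a shared part and a complementary part, can be caricatured with a toy objective. This is a conceptual sketch only: the split-in-half layout, the cosine-based terms, and the vector sizes are all assumptions, not the paper's actual loss functions.

```python
# Conceptual sketch (assumptions throughout, not the paper's losses):
# pull shared halves of two modality features together, push
# complementary halves toward orthogonality.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def disentangle_loss(feat_a, feat_b):
    """Toy loss: low when shared halves agree and complementary halves differ."""
    d = feat_a.shape[0] // 2
    shared_a, comp_a = feat_a[:d], feat_a[d:]
    shared_b, comp_b = feat_b[:d], feat_b[d:]
    # Term 1 -> 0 when shared parts align; term 2 -> 0 when
    # complementary parts are orthogonal.
    return (1 - cosine(shared_a, shared_b)) + abs(cosine(comp_a, comp_b))

rng = np.random.default_rng(0)
base = rng.normal(size=4)                        # common tumor information
f_t1 = np.concatenate([base, rng.normal(size=4)])  # shared + T1-specific part
f_t2 = np.concatenate([base, rng.normal(size=4)])  # shared + T2-specific part
print(round(disentangle_loss(f_t1, f_t2), 3))
```

Because the shared halves here are identical, the first term vanishes and the loss reduces to how non-orthogonal the two modality-specific parts happen to be.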


Keywords: deep learning; disentanglement; glioblastoma; multimodality data; solitary brain metastases.

MeSH terms

  • Adult
  • Algorithms
  • Brain Neoplasms* / diagnostic imaging
  • Deep Learning*
  • Glioblastoma* / diagnostic imaging
  • Humans
  • Retrospective Studies