Multimodal Stereotactic Brain Tumor Segmentation Using 3D-Znet

Bioengineering (Basel). 2023 May 11;10(5):581. doi: 10.3390/bioengineering10050581.

Abstract

Stereotactic brain tumor segmentation based on 3D neuroimaging data is a challenging task due to the complexity of the brain architecture, the heterogeneity of tumor malformations, and the extreme variability of intensity-signal and noise distributions. Early tumor diagnosis can help medical professionals select optimal treatment plans that can potentially save lives. Artificial intelligence (AI) has previously been used for automated tumor diagnostics and segmentation models. However, model development, validation, and reproducibility remain challenging, and cumulative efforts are often required to produce a fully automated and reliable computer-aided diagnostic system for tumor segmentation. This study proposes an enhanced deep neural network approach, the 3D-Znet model, based on the variational autoencoder-autodecoder Znet method, for segmenting 3D MR (magnetic resonance) volumes. The 3D-Znet architecture relies on fully dense connections to enable the reuse of features on multiple levels and thereby improve model performance. It consists of four encoders and four decoders, along with initial input and final output blocks. Each encoder-decoder block in the network includes double 3D convolutional layers, 3D batch normalization, and an activation function, followed by size normalization between inputs and outputs and concatenation across the encoding and decoding branches. The proposed deep convolutional neural network was trained and validated on a multimodal stereotactic neuroimaging dataset (BraTS2020) that includes multimodal tumor masks. Evaluation of the pretrained model yielded the following Dice coefficient scores: Whole Tumor (WT) = 0.91, Tumor Core (TC) = 0.85, and Enhancing Tumor (ET) = 0.86. The performance of the proposed 3D-Znet method is comparable to that of other state-of-the-art methods. 
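As a rough sketch of the encoder/decoder building block described above (double 3D convolution, 3D batch normalization, and an activation, with channel-wise concatenation across the encoding and decoding branches), a minimal PyTorch version might look as follows. The kernel size, channel widths, and choice of ReLU are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class DoubleConv3D(nn.Module):
    """Two (Conv3d -> BatchNorm3d -> ReLU) stages, mirroring the abstract's
    encoder/decoder blocks. Kernel size and activation are assumptions."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

# Toy forward pass: 4 input channels stand in for the 4 MRI modalities.
x = torch.randn(1, 4, 16, 16, 16)        # (batch, channels, D, H, W)
enc = DoubleConv3D(4, 8)
feat = enc(x)                            # -> (1, 8, 16, 16, 16)
# Concatenation across the encoding/decoding branches is channel-wise:
merged = torch.cat([feat, feat], dim=1)  # -> (1, 16, 16, 16, 16)
print(tuple(merged.shape))
```

In the full network, the second argument to `torch.cat` would be the upsampled decoder features rather than a duplicate of the encoder output; the point here is only that skip connections stack feature maps along the channel dimension.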
Our protocol demonstrates the importance of data augmentation for avoiding overfitting and enhancing model performance.
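For reference, the Dice coefficient used in the evaluation above measures volumetric overlap between a predicted mask and a ground-truth mask, with values near 1 indicating strong agreement. A small NumPy sketch (the smoothing term `eps` is an illustrative assumption to avoid division by zero on empty masks):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity between two binary segmentation masks:
    Dice = 2 * |A intersect B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy 3D masks standing in for predicted and ground-truth tumor regions.
pred = np.zeros((4, 4, 4), dtype=bool)
target = np.zeros((4, 4, 4), dtype=bool)
pred[1:3, 1:3, 1:3] = True    # 8 voxels predicted
target[1:4, 1:3, 1:3] = True  # 12 voxels in the ground truth
print(round(dice_coefficient(pred, target), 2))  # 0.8
```

Here the overlap is 8 voxels, so Dice = 2 * 8 / (8 + 12) = 0.8.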

Keywords: 3D tumor segmentation; Znet; deep learning; encoder–decoder; multimodal neuroimaging data.

Grants and funding

This study was supported in part by NSF grants 1916425, 1734853, and 1636840, and NIH grants UL1 TR002240, R01 CA233487, R01 MH121079, R01 MH126137, and T32 GM141746. The funding agencies played no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. Many colleagues at the University of Michigan Statistics Online Computational Resource (SOCR) contributed ideas, infrastructure, and support.