Autosegmentation of brain metastases using 3D FCNN models and methods to manage GPU memory limitations

Biomed Phys Eng Express. 2022 Nov 4;8(6). doi: 10.1088/2057-1976/ac9b5b.

Abstract

Aims. To explore the efficacy of two different approaches to training a Fully Convolutional Neural Network (FCNN) under Graphics Processing Unit (GPU) memory limitations, and to investigate whether pre-trained two-dimensional weights can be transferred into a three-dimensional model for brain tumour segmentation.

Materials and methods. Models were developed in Python using TensorFlow and Keras. T1 contrast-enhanced MRI scans and associated contouring data from 104 patients were used to train and validate the model. To fit within GPU limitations, the data were either resized to one-quarter of the original resolution or split into four quarters at full resolution, and the two approaches were compared. Weights from a two-dimensional VGG16 model trained on ImageNet were transformed into three dimensions and compared with randomly generated initial weights.

Results. Resizing the data produced superior Dice similarity coefficients with fewer false positives than quartering the data, whereas quartering the data yielded superior sensitivity. Transforming and transferring the two-dimensional weights did not consistently improve training or final metrics.

Conclusion. For segmentation of brain tumours, resizing the data results in better performance than quartering the data. For the model and approaches used in this report, transferring weights demonstrated no benefit.
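The two memory-management strategies and the 2D-to-3D weight transfer can be sketched in NumPy as below. The abstract does not specify the exact implementation, so the helper names, the stride-based downsampling, and the replicate-and-rescale kernel inflation (as popularised by I3D-style networks) are illustrative assumptions, not the authors' method.

```python
import numpy as np


def downsample_volume(vol: np.ndarray) -> np.ndarray:
    """Reduce in-plane resolution by keeping every second voxel,
    quartering the number of voxels per slice (illustrative stand-in
    for resizing the data to fit GPU memory)."""
    return vol[:, ::2, ::2]


def quarter_volume(vol: np.ndarray) -> list:
    """Split a (D, H, W) volume into four (D, H/2, W/2) quadrants so
    each piece has a quarter of the original memory footprint."""
    h, w = vol.shape[1] // 2, vol.shape[2] // 2
    return [vol[:, :h, :w], vol[:, :h, w:],
            vol[:, h:, :w], vol[:, h:, w:]]


def inflate_2d_to_3d(w2d: np.ndarray, depth: int = 3) -> np.ndarray:
    """Inflate a 2D conv kernel (kH, kW, C_in, C_out) into a 3D kernel
    (kD, kH, kW, C_in, C_out) by replicating it along a new depth axis
    and dividing by depth so activation magnitudes stay comparable."""
    return np.stack([w2d] * depth, axis=0) / depth


# A toy MRI-sized volume and a VGG16-style 3x3 first-layer kernel.
vol = np.zeros((64, 256, 256), dtype=np.float32)
print(downsample_volume(vol).shape)   # (64, 128, 128)
print(quarter_volume(vol)[0].shape)   # (64, 128, 128)

w2d = np.random.randn(3, 3, 3, 64).astype(np.float32)
print(inflate_2d_to_3d(w2d).shape)    # (3, 3, 3, 3, 64)
```

Both memory strategies produce pieces of the same in-plane size here, which is why they are directly comparable; the trade-off reported above is that resizing discards detail while quartering discards spatial context at the cut boundaries.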

Keywords: 3D; brain metastases; convolutional neural network; magnetic resonance imaging; memory limitation; segmentation; transfer learning.

MeSH terms

  • Brain Neoplasms* / diagnostic imaging
  • Humans
  • Image Processing, Computer-Assisted* / methods
  • Magnetic Resonance Imaging / methods
  • Neural Networks, Computer