Using Simulated Training Data of Voxel-Level Generative Models to Improve 3D Neuron Reconstruction

IEEE Trans Med Imaging. 2022 Dec;41(12):3624-3635. doi: 10.1109/TMI.2022.3191011. Epub 2022 Dec 2.

Abstract

Reconstructing neuron morphologies from fluorescence microscope images plays a critical role in neuroscience studies. It relies on image segmentation to produce initial masks, either for further processing or as final results representing neuronal morphologies. This has been a challenging step due to the variation and complexity of noisy intensity patterns in neuron images acquired from microscopes. While progress in deep learning has brought the goal of accurate segmentation much closer to reality, creating training data for producing powerful neural networks is often laborious. To overcome the difficulty of obtaining large amounts of annotated data, we propose a novel strategy of using two-stage generative models to simulate training data with voxel-level labels. Trained on unlabeled data by optimizing a novel objective function that preserves predefined labels, the models are able to synthesize realistic 3D images with underlying voxel labels. We showed that these synthetic images could train segmentation networks to achieve even better performance than manually labeled data. To demonstrate an immediate impact of our work, we further showed that segmentation results produced by networks trained on synthetic data could be used to improve existing neuron reconstruction methods.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Image Processing, Computer-Assisted / methods
  • Imaging, Three-Dimensional* / methods
  • Microscopy
  • Neural Networks, Computer*
  • Neurons