A deep generative model of 3D single-cell organization

PLoS Comput Biol. 2022 Jan 18;18(1):e1009155. doi: 10.1371/journal.pcbi.1009155. eCollection 2022 Jan.

Abstract

We introduce a framework for end-to-end integrative modeling of 3D single-cell multi-channel fluorescence image data of diverse subcellular structures. We employ stacked conditional β-variational autoencoders to first learn a latent representation of cell morphology, and then learn a latent representation of subcellular structure localization that is conditioned on the learned cell morphology. Our model is flexible and can be trained on images of arbitrary subcellular structures and at varying degrees of sparsity and reconstruction fidelity. We train our full model on 3D cell image data and explore design trade-offs in the 2D setting. Once trained, our model can be used to predict plausible locations of structures in cells where these structures were not imaged. The trained model can also be used to quantify the variation in the location of subcellular structures by generating plausible instantiations of each structure in arbitrary cell geometries. We apply our trained model to a small drug perturbation screen to demonstrate its applicability to new data, and show that the latent representations of drugged cells differ from those of unperturbed cells in ways consistent with the on-target effects of the drugs.
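Each stage of the stacked conditional β-VAE described above is trained on the standard β-VAE objective: a reconstruction term plus a β-weighted KL divergence between the approximate posterior and a standard normal prior. A minimal sketch of that objective, assuming diagonal-Gaussian posteriors (function names are illustrative, not taken from the authors' code):

```python
import math

def kl_diag_gaussian(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) )
    for a diagonal-Gaussian approximate posterior."""
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, logvar))

def beta_vae_loss(recon_loss, mu, logvar, beta):
    """β-VAE objective: reconstruction error plus a β-weighted KL term.
    beta > 1 trades reconstruction fidelity for a more factorized latent space,
    matching the sparsity/fidelity trade-off explored in the paper."""
    return recon_loss + beta * kl_diag_gaussian(mu, logvar)
```

In the stacked setup, the same objective applies twice: once for the cell-morphology autoencoder, and once for the structure-localization autoencoder, whose encoder and decoder additionally receive the morphology latent as a conditioning input.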

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Cell Nucleus / physiology*
  • Cell Shape / physiology*
  • Cells, Cultured
  • Computational Biology
  • Humans
  • Imaging, Three-Dimensional
  • Induced Pluripotent Stem Cells / cytology*
  • Intracellular Space* / chemistry
  • Intracellular Space* / metabolism
  • Intracellular Space* / physiology
  • Microscopy, Fluorescence
  • Models, Biological*
  • Single-Cell Analysis

Grants and funding

The study was supported by the Allen Institute. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.