DeephESC 2.0: Deep Generative Multi Adversarial Networks for improving the classification of hESC

PLoS One. 2019 Mar 6;14(3):e0212849. doi: 10.1371/journal.pone.0212849. eCollection 2019.

Abstract

Human embryonic stem cells (hESC), derived from blastocysts, provide unique cellular models for numerous potential applications. They hold great promise for the treatment of diseases such as Parkinson's, Huntington's, and diabetes mellitus. hESC are a reliable developmental model for early embryonic growth because of their ability to divide indefinitely (pluripotency) and to differentiate, or functionally change, into any adult cell type. Their adaptation to toxicological studies is particularly attractive, as pluripotent stem cells can be used to model various stages of prenatal development. Automated detection and classification of human embryonic stem cells in videos is of great interest among biologists for quantified analysis of the various states of hESC in experimental work. Currently, video annotation is done by hand, a process which is very time consuming and labor intensive. To solve this problem, this paper introduces DeephESC 2.0, an automated machine learning approach consisting of two parts: (a) Generative Multi Adversarial Networks (GMAN) for generating synthetic images of hESC, and (b) a hierarchical classification system consisting of Convolutional Neural Networks (CNN) and Triplet CNNs to classify phase contrast hESC images into six different classes, namely: Cell clusters, Debris, Unattached cells, Attached cells, Dynamically Blebbing cells and Apoptotically Blebbing cells. The approach is completely non-invasive and does not require any chemical treatment or staining of hESC. DeephESC 2.0 is able to classify hESC images with an accuracy of 93.23%, outperforming state-of-the-art approaches by at least 20%. Furthermore, DeephESC 2.0 is able to generate a large number of synthetic images which can be used to augment the dataset. Experimental results show that training DeephESC 2.0 exclusively on a large number of synthetic images helps to improve the performance of the classifier on original images from 93.23% to 94.46%. This paper also evaluates the quality of the generated synthetic images using the Structural SIMilarity (SSIM) index, Peak Signal to Noise Ratio (PSNR) and statistical p-value metrics, and compares them with state-of-the-art approaches for generating synthetic images. DeephESC 2.0 saves hundreds of hours of manual labor which would otherwise be spent on manually or semi-manually annotating videos.
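As an illustration of the image-quality evaluation described in the abstract, the sketch below computes SSIM and PSNR between a real phase contrast image and a synthetic one using scikit-image. This is not the authors' code: the file names and the use of a two-sample t-test on pixel intensities for the p-value comparison are assumptions, since the abstract does not specify how the statistical test was set up.

    # Illustrative sketch (not the authors' implementation): comparing a
    # synthetic hESC image against a real phase contrast image with SSIM,
    # PSNR, and a simple p-value test. File names and the choice of t-test
    # are assumptions for demonstration only.
    import numpy as np
    from skimage.io import imread
    from skimage.metrics import structural_similarity, peak_signal_noise_ratio
    from scipy.stats import ttest_ind

    real = imread("real_hesc.png", as_gray=True).astype(np.float64)
    fake = imread("synthetic_hesc.png", as_gray=True).astype(np.float64)

    data_range = real.max() - real.min()

    # Structural SIMilarity index: 1.0 means the images are identical.
    ssim = structural_similarity(real, fake, data_range=data_range)

    # Peak Signal to Noise Ratio in dB: higher values indicate closer images.
    psnr = peak_signal_noise_ratio(real, fake, data_range=data_range)

    # Hypothetical statistical comparison: Welch's two-sample t-test on the
    # pixel intensity distributions of the two images.
    _, p_value = ttest_ind(real.ravel(), fake.ravel(), equal_var=False)

    print(f"SSIM: {ssim:.4f}  PSNR: {psnr:.2f} dB  p-value: {p_value:.4f}")

In practice such metrics would be averaged over many real/synthetic image pairs rather than computed for a single pair as shown here.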

Publication types

  • Research Support, Non-U.S. Gov't
  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Cells, Cultured
  • Human Embryonic Stem Cells / classification*
  • Humans
  • Image Processing, Computer-Assisted / methods*
  • Intravital Microscopy
  • Machine Learning*
  • Neural Networks, Computer
  • Signal-To-Noise Ratio
  • Video Recording*

Grants and funding

This work was supported in part by NSF grant 1552454, ONR grant N00014-12-1-1026 and Bourns Endowment funds. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.