Using machine learning for chemical-free histological tissue staining

J Histotechnol. 2024 Apr 22:1-4. doi: 10.1080/01478885.2024.2338585. Online ahead of print.

Abstract

Hematoxylin and eosin (H&E) staining can be hazardous, expensive, and prone to error and variability. To circumvent these issues, artificial intelligence/machine learning models, such as generative adversarial networks (GANs), are being used to 'virtually' stain images of unstained tissue, producing results indistinguishable from chemically stained tissue. Frameworks such as deep convolutional GANs (DCGANs) and conditional GANs (CGANs) have successfully generated highly reproducible 'stained' images. However, their utility may be limited by the requirement for registered, paired images, which can be difficult to obtain. To avoid these dataset requirements, we attempted to use an unsupervised CycleGAN model built on pix2pix components (5,6) to turn unpaired, unstained bright-field images into pathologist-approved digitally 'stained' images. Using formalin-fixed, paraffin-embedded liver samples, images of 5 µm sections (20×) were obtained before and after staining to create 'stained' and 'unstained' datasets. Model implementation was conducted using Ubuntu 20.04.4 LTS, 32 GB RAM, an Intel Core i7-9750 CPU @ 2.6 GHz, an Nvidia GeForce RTX 2070 Mobile, Python 3.7.11, and TensorFlow 2.9.1. The CycleGAN framework utilized a U-Net-based generator and discriminator from pix2pix, a CGAN. The CycleGAN used a modified loss function, cycle-consistency loss, which assumes unpaired images; loss was therefore measured twice, once in each translation direction. To our knowledge, this is the first documented application of this architecture to unpaired bright-field images. Results and suggested improvements are discussed.
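As a minimal sketch of the cycle-consistency loss described above, assuming TensorFlow 2.x consistent with the software stack listed; the generator names, LAMBDA weighting, and patch size are illustrative assumptions, not the study's code:

    import tensorflow as tf

    LAMBDA = 10.0  # common weighting for the cycle term (assumption, not from the study)

    def cycle_consistency_loss(real_unstained, real_stained, generator_g, generator_f):
        # generator_g maps unstained -> stained; generator_f maps stained -> unstained.
        # The loss is measured twice, once per translation direction.
        fake_stained = generator_g(real_unstained, training=True)
        cycled_unstained = generator_f(fake_stained, training=True)   # unstained -> stained -> unstained
        fake_unstained = generator_f(real_stained, training=True)
        cycled_stained = generator_g(fake_unstained, training=True)   # stained -> unstained -> stained
        # L1 reconstruction error in each direction, summed and weighted.
        forward_loss = tf.reduce_mean(tf.abs(real_unstained - cycled_unstained))
        backward_loss = tf.reduce_mean(tf.abs(real_stained - cycled_stained))
        return LAMBDA * (forward_loss + backward_loss)

    # Demonstration with identity 'generators' and random image-shaped tensors.
    identity = lambda x, training=False: x
    x = tf.random.uniform((1, 256, 256, 3))  # hypothetical 20x patch, values in [0, 1]
    y = tf.random.uniform((1, 256, 256, 3))
    print(cycle_consistency_loss(x, y, identity, identity).numpy())  # 0.0 for identity maps

Because both reconstruction terms are driven toward zero, neither domain requires pixel-registered counterparts in the other, which is what removes the paired-image requirement noted above.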

Keywords: Artificial intelligence/machine learning; CycleGAN; H&E; chemical-free histology; digital; pix2pix; unpaired images; virtual staining.