Improving unsupervised stain-to-stain translation using self-supervision and meta-learning

J Pathol Inform. 2022 Jun 20:13:100107. doi: 10.1016/j.jpi.2022.100107. eCollection 2022.

Abstract

Background: In digital pathology, many image analysis tasks are challenged by the need for large and time-consuming manual data annotations to cope with various sources of variability in the image domain. Unsupervised domain adaptation based on image-to-image translation is gaining importance in this field because it addresses these variabilities without the manual annotation overhead. Here, we tackle the variation across different histological stains by unsupervised stain-to-stain translation to enable stain-independent applicability of a deep learning segmentation model.

Methods: We use CycleGANs for stain-to-stain translation in kidney histopathology and propose two novel approaches to improve translation effectiveness. First, we integrate a prior segmentation network into the CycleGAN for self-supervised, application-oriented optimization of translation through semantic guidance, and second, we add extra channels to the translation output to implicitly separate the artificial meta-information that is otherwise encoded within the image to cope with underdetermined reconstructions.
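The following is a minimal, illustrative sketch of how a prior segmentation network could provide such semantic guidance, namely by adding a segmentation-consistency term to the CycleGAN generator objective. It is not the authors' implementation; all names (G_AB, G_BA, D_B, seg_net, pseudo_labels_A, lambda_seg) and the specific loss formulation are assumptions, and the extra-channel modification is not shown.

```python
# Hedged sketch: CycleGAN generator loss extended with a semantic-guidance term.
# All module names and weights are illustrative assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def generator_loss(G_AB, G_BA, D_B, seg_net, real_A, pseudo_labels_A,
                   lambda_cyc=10.0, lambda_seg=1.0):
    """Adversarial + cycle-consistency + segmentation-guidance loss for G_AB.

    seg_net: segmentation network for the target stain B, kept frozen and used
             to score the translated image against pseudo labels.
    pseudo_labels_A: segmentation targets for real_A (e.g. predictions of a
             source-stain segmenter), providing the self-supervision signal.
    """
    fake_B = G_AB(real_A)   # translate stain A -> stain B
    rec_A = G_BA(fake_B)    # cycle back to stain A

    # LSGAN-style adversarial loss: fool the target-stain discriminator.
    d_out = D_B(fake_B)
    adv = F.mse_loss(d_out, torch.ones_like(d_out))

    # Cycle-consistency: the reconstruction should match the input image.
    cyc = F.l1_loss(rec_A, real_A)

    # Semantic guidance: the frozen target-stain segmenter should still
    # recover the (pseudo) labels from the translated image.
    seg = F.cross_entropy(seg_net(fake_B), pseudo_labels_A)

    return adv + lambda_cyc * cyc + lambda_seg * seg
```

In such a setup, the segmentation term steers the translation toward images that remain segmentable by the downstream model, which is the application-oriented goal described above.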

Results: The latter showed performance partially superior to the unmodified CycleGAN, but the former performed best across all stains, providing instance-level Dice scores between 78% and 92% for most kidney structures, such as glomeruli, tubules, and veins. However, CycleGANs showed only limited performance in translating other structures, e.g. arteries. For all structures and stains, performance was somewhat lower than segmentation in the original stain.

Conclusions: Our study suggests that, with current unsupervised technologies, it seems unlikely that "generally" applicable simulated stains can be produced.

Keywords: Deep learning; Digital pathology; Domain translation; Kidney; Segmentation; Stain-to-stain translation.