A Unifying Generator Loss Function for Generative Adversarial Networks

Entropy (Basel). 2024 Mar 27;26(4):290. doi: 10.3390/e26040290.

Abstract

A unifying α-parametrized generator loss function is introduced for a dual-objective generative adversarial network (GAN) that uses a canonical (or classical) discriminator loss function such as the one in the original GAN (VanillaGAN) system. The generator loss function is based on a symmetric class probability estimation type function, Lα, and the resulting GAN system is termed Lα-GAN. Under an optimal discriminator, it is shown that the generator's optimization problem consists of minimizing a Jensen-fα-divergence, a natural generalization of the Jensen-Shannon divergence, where fα is a convex function expressed in terms of the loss function Lα. It is also demonstrated that this Lα-GAN problem recovers as special cases a number of GAN problems in the literature, including VanillaGAN, least squares GAN (LSGAN), least kth-order GAN (LkGAN), and the recently introduced (αD, αG)-GAN with αD = 1. Finally, experimental results are provided for three datasets (MNIST, CIFAR-10, and Stacked MNIST) to illustrate the performance of various examples of the Lα-GAN system.
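To make the divergence in the abstract concrete, the following is a minimal NumPy sketch of a Jensen-f-divergence for discrete distributions, assuming the standard symmetrized form: the average of the f-divergences from each distribution to their equal-weight mixture. This is an illustrative reading of how such a divergence generalizes the Jensen-Shannon divergence, not the paper's exact definition of the Jensen-fα-divergence; the function names are ours. With the KL generator f(t) = t log t, it reduces to the Jensen-Shannon divergence.

```python
import numpy as np

def f_divergence(p, q, f):
    # D_f(p || q) = sum_i q_i * f(p_i / q_i), for strictly positive q
    return np.sum(q * f(p / q))

def jensen_f_divergence(p, q, f):
    # Symmetrized form (our assumption): average f-divergence to the
    # equal-weight mixture m = (p + q) / 2
    m = 0.5 * (p + q)
    return 0.5 * f_divergence(p, m, f) + 0.5 * f_divergence(q, m, f)

# KL generator: f(t) = t * log(t) recovers the Jensen-Shannon divergence
f_kl = lambda t: t * np.log(t)

p = np.array([0.5, 0.5])
q = np.array([0.9, 0.1])
jsd = jensen_f_divergence(p, q, f_kl)  # Jensen-Shannon divergence of p and q
```

Other convex generators f (with f(1) = 0) plug into the same template, which is the sense in which a single fα, induced by the loss Lα, can subsume VanillaGAN-style objectives as special cases.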

Keywords: Jensen-f-divergence; deep learning; f-divergence; generative adversarial networks; parameterized loss functions.