Generative adversarial networks with decoder-encoder output noises

Neural Netw. 2020 Jul:127:19-28. doi: 10.1016/j.neunet.2020.04.005. Epub 2020 Apr 9.

Abstract

In recent years, research on image generation has advanced rapidly. The generative adversarial network (GAN) has emerged as a promising framework that uses adversarial training to improve the generative ability of its generator. However, since GAN and most of its variants use randomly sampled noise as the input to their generators, they must learn a mapping from an entire random distribution to the image manifold. Because the structures of the random distribution and the image manifold are generally different, GAN and its variants are difficult to train and slow to converge. In this paper, we propose a novel deep model called generative adversarial networks with decoder-encoder output noises (DE-GANs), which takes advantage of both adversarial training and variational Bayesian inference to improve the image generation performance of GAN and its variants. DE-GANs use a pre-trained decoder-encoder architecture to map random noise vectors to informative ones, which are then fed to the generator of the adversarial network. Since the decoder-encoder architecture is trained on the same data set as the generator, its output vectors, used as inputs to the generator, carry the intrinsic distribution information of the training images, which greatly improves the learnability of the generator and the quality of the generated images. Extensive experiments demonstrate the effectiveness of the proposed model, DE-GANs.
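
For illustration only, the sketch below shows how the decoder-encoder noise pipeline described in the abstract could be wired up in PyTorch. The module names (Decoder, Encoder, Generator), network sizes, and latent dimensionality are assumptions for this example, not details from the paper; per the abstract, the decoder-encoder would be pre-trained on the same data set as the generator before its outputs are used as generator inputs.

```python
import torch
import torch.nn as nn

latent_dim = 100  # assumed noise dimensionality (not specified in the abstract)

# Decoder and encoder assumed to come from a VAE-style model pre-trained
# on the same image data set as the GAN generator (per the abstract).
class Decoder(nn.Module):
    def __init__(self, latent_dim=100, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

class Encoder(nn.Module):
    def __init__(self, img_dim=28 * 28, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    def __init__(self, latent_dim=100, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, img_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

decoder, encoder, generator = Decoder(), Encoder(), Generator()
# In practice, decoder/encoder weights would be loaded from the pre-trained model here.

# Random noise -> decoder -> encoder yields an "informative" noise vector that
# reflects the training-image distribution; the GAN generator is driven by this
# vector instead of raw Gaussian noise.
z = torch.randn(64, latent_dim)          # ordinary random noise
with torch.no_grad():                    # decoder-encoder stays fixed during GAN training
    z_informative = encoder(decoder(z))
fake_images = generator(z_informative)   # generator output fed to the adversarial game
```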

Keywords: Generative adversarial networks; Generative models; Image generation; Noise; Variational autoencoders.

MeSH terms

  • Bayes Theorem
  • Humans
  • Image Processing, Computer-Assisted / methods
  • Image Processing, Computer-Assisted / trends
  • Neural Networks, Computer*
  • Pattern Recognition, Automated / methods*
  • Pattern Recognition, Automated / trends
  • Random Allocation