Analysis of the transferability and robustness of GANs evolved for Pareto set approximations

Neural Netw. 2020 Dec:132:281-296. doi: 10.1016/j.neunet.2020.09.003. Epub 2020 Sep 16.

Abstract

The generative adversarial network (GAN) is a good example of a strong-performing, neural network-based generative model, although it has drawbacks of its own. Mode collapse and the difficulty of finding an optimal network structure are two of the most pressing issues. In this paper, we address both issues at once by proposing a neuro-evolutionary approach with an agile evaluation method for the fast evolution of robust deep architectures that avoid mode collapse. The computation of Pareto set approximations with GANs is chosen as a suitable benchmark to evaluate the quality of our approach. Furthermore, we demonstrate the consistency, scalability, and generalization capabilities of the proposed method, which indicates its potential applicability to many areas. Finally, we revisit the problem of designing this kind of model by analyzing the characteristics of the best-performing GAN specifications, and we conclude with a set of general guidelines. These guidelines reduce the dimensionality of the problem of manual structural design or automated search.
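The abstract describes the method only at a high level. As a rough illustration of the two ingredients it combines, evolving network specifications and selecting them by Pareto dominance under a cheap ("agile") evaluation, the sketch below evolves hypothetical GAN specification genomes against two placeholder objectives. Every name, field, and objective here (random_genome, gen_layers, proxy_error, and so on) is an illustrative assumption, not the encoding, fitness measure, or algorithm used in the paper.

```python
import random

# Toy "genome": a hypothetical GAN specification (layer counts, width, learning rate).
# These fields are illustrative placeholders, not the paper's encoding.
def random_genome():
    return {
        "gen_layers": random.randint(1, 4),
        "disc_layers": random.randint(1, 4),
        "hidden_units": random.choice([16, 32, 64, 128]),
        "lr": 10 ** random.uniform(-4, -2),
    }

def mutate(genome):
    # Perturb one randomly chosen field of the specification.
    child = dict(genome)
    key = random.choice(list(child))
    if key == "lr":
        child["lr"] *= 10 ** random.uniform(-0.5, 0.5)
    elif key == "hidden_units":
        child["hidden_units"] = random.choice([16, 32, 64, 128])
    else:
        child[key] = max(1, child[key] + random.choice([-1, 1]))
    return child

def evaluate(genome):
    # Placeholder stand-in for a fast evaluation: in practice these objectives
    # would be measured on a short GAN training run (e.g., approximation error
    # vs. model complexity). Lower is better for both.
    complexity = genome["gen_layers"] * genome["hidden_units"]
    proxy_error = 1.0 / (1 + complexity) + abs(genome["lr"] - 1e-3)
    return (proxy_error, complexity)

def dominates(a, b):
    # True if objective vector a Pareto-dominates b (minimization).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    # Keep only the non-dominated specifications.
    scored = [(g, evaluate(g)) for g in population]
    return [g for g, f in scored
            if not any(dominates(f2, f) for _, f2 in scored if f2 != f)]

# Minimal evolutionary loop: mutate survivors, retain the non-dominated set.
population = [random_genome() for _ in range(20)]
for generation in range(30):
    offspring = [mutate(random.choice(population)) for _ in range(20)]
    population = pareto_front(population + offspring)

for g in population:
    print(g, evaluate(g))
```

The sketch keeps the whole non-dominated set as the surviving population each generation; a full neuro-evolutionary system would instead train and score real generator/discriminator pairs and typically use a crowding or archiving scheme to preserve diversity along the front.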

Keywords: Generative adversarial networks; Knowledge transferability; Multi-objective optimization; Neuro-evolution; Pareto front approximation.

MeSH terms

  • Deep Learning*
  • Neural Networks, Computer*