Reframing control methods for parameters optimization in adversarial image generation

Neural Netw. 2022 Sep;153:303-313. doi: 10.1016/j.neunet.2022.06.015. Epub 2022 Jun 16.

Abstract

Training procedures for deep networks require the setting of several hyper-parameters that strongly affect the obtained results. The problem is even worse in adversarial learning strategies used for image generation, where a proper balance between the discriminative and generative networks is fundamental for effective training. In this work we propose a novel hyper-parameter optimization strategy based on Proportional-Integral (PI) and Proportional-Integral-Derivative (PID) controllers. Both open-loop and closed-loop schemes are proposed, for tuning either a single parameter or multiple parameters together, allowing efficient parameter tuning without resorting to computationally demanding trial-and-error schemes. We applied the proposed strategies to the widely used BEGAN and CycleGAN models: they allowed us to achieve a more stable training that converges faster. The obtained images are also sharper, with a slightly better quality both visually and according to the FID and FCN metrics. Image translation results also showed better background preservation and fewer color artifacts than CycleGAN.
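The abstract does not report implementation details, but the general idea of driving a training hyper-parameter with a discrete PID controller can be sketched as follows. This is a minimal illustrative sketch, not the authors' method: the controller gains, the monitored signal (here the gap between discriminator and generator losses), and the toy `train_step` plant are assumptions introduced only for the example.

```python
import random

# Minimal sketch (not the paper's implementation): a discrete PID controller
# that adjusts one training hyper-parameter so that a monitored signal, here
# the gap d_loss - g_loss, is driven toward a chosen setpoint.

class PIDController:
    def __init__(self, kp, ki, kd, setpoint=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd   # controller gains (illustrative values)
        self.setpoint = setpoint                 # target value of the monitored signal
        self.integral = 0.0                      # accumulated error (I term)
        self.prev_error = None                   # previous error (for the D term)

    def update(self, measurement):
        """Return a correction term given the current measurement."""
        error = self.setpoint - measurement
        self.integral += error
        derivative = 0.0 if self.prev_error is None else error - self.prev_error
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def train_step(balance):
    """Toy stand-in for one GAN training step: a linear plant where increasing
    `balance` raises d_loss and lowers g_loss, plus some noise."""
    d_loss = balance + random.uniform(-0.05, 0.05)
    g_loss = 1.0 - balance + random.uniform(-0.05, 0.05)
    return d_loss, g_loss


# The controlled quantity `balance` plays the role of a hyper-parameter such as
# an equilibrium/weighting term; the PID output nudges it so the loss gap stays near 0.
pid = PIDController(kp=0.05, ki=0.005, kd=0.01, setpoint=0.0)
balance = 0.2
for step in range(200):
    d_loss, g_loss = train_step(balance)
    balance += pid.update(d_loss - g_loss)
    balance = min(max(balance, 0.0), 1.0)    # keep the hyper-parameter in a valid range
print(f"final balance: {balance:.3f}")
```

In a closed-loop scheme of this kind the monitored training signal is fed back at every step, whereas an open-loop scheme would instead apply a pre-computed schedule to the hyper-parameter; which signal is monitored and which parameter is controlled depends on the model (e.g., BEGAN or CycleGAN) and is only assumed here.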

Keywords: Control methods; GAN; Hyper-parameters optimization.

MeSH terms

  • Artifacts*
  • Image Processing, Computer-Assisted* / methods