ControlVAE: Tuning, Analytical Properties, and Performance Analysis

IEEE Trans Pattern Anal Mach Intell. 2022 Dec;44(12):9285-9297. doi: 10.1109/TPAMI.2021.3127323. Epub 2022 Nov 7.

Abstract

This paper reviews the novel concept of a controllable variational autoencoder (ControlVAE), discusses its parameter tuning to meet application needs, derives its key analytic properties, and offers useful extensions and applications. ControlVAE is a new variational autoencoder (VAE) framework that combines automatic control theory with the basic VAE to stabilize the KL-divergence of VAE models at a specified value. It leverages a non-linear PI controller, a variant of the proportional-integral-derivative (PID) controller, to dynamically tune the weight of the KL-divergence term in the evidence lower bound (ELBO), using the output KL-divergence as feedback. This allows the KL-divergence to be driven precisely to a desired value (set point), which is effective both in avoiding posterior collapse and in learning disentangled representations. While prior work developed alternative techniques for controlling the KL-divergence, we show that our PI controller has better stability properties and thus better convergence, thereby producing better disentangled representations from finite training data. To improve the ELBO of ControlVAE over that of the regular VAE, we provide a simplified theoretical analysis that informs the choice of the KL-divergence set point. We evaluate the proposed method on three tasks: image generation, language modeling, and disentangled representation learning. The results show that ControlVAE achieves much better reconstruction quality than competing methods at comparable levels of disentanglement. On the language modeling task, our method avoids posterior collapse (KL vanishing) and improves the diversity of generated text. Moreover, it can change the optimization trajectory, improving both the ELBO and the reconstruction quality for image generation.
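
As a concrete illustration of the feedback loop described above, the sketch below shows how a non-linear PI controller might adjust the KL weight (beta) during training, using the observed KL-divergence as feedback against a set point. This is a minimal sketch rather than the paper's reference implementation: the class and parameter names (kp, ki, beta_min, beta_max), the specific sigmoid-shaped proportional term, and the anti-windup guard are illustrative assumptions consistent with the abstract's description.

import math


class KLWeightPIController:
    """Non-linear PI controller that tunes the KL weight (beta) in the ELBO.

    Minimal sketch: hyperparameter values and names are illustrative, not
    the paper's reference implementation.
    """

    def __init__(self, kp=0.01, ki=0.0001, beta_min=0.0, beta_max=1.0):
        self.kp = kp                  # proportional gain
        self.ki = ki                  # integral gain
        self.beta_min = beta_min      # lower bound on beta
        self.beta_max = beta_max      # upper bound on beta
        self.i_term = 0.0             # accumulated integral contribution

    def update(self, kl_setpoint, kl_observed):
        # Feedback error: desired KL-divergence (set point) minus observed KL.
        error = kl_setpoint - kl_observed

        # Non-linear proportional term: a sigmoid of the error keeps the
        # response bounded even for large errors (clamp avoids overflow).
        p_term = self.kp / (1.0 + math.exp(max(min(error, 30.0), -30.0)))

        # Integral term with a simple anti-windup guard: stop accumulating
        # once the output would saturate at its bounds.
        candidate = p_term - self.i_term
        if self.beta_min < candidate < self.beta_max:
            self.i_term += self.ki * error

        # Clamp the resulting KL weight to [beta_min, beta_max].
        beta = p_term - self.i_term
        return min(max(beta, self.beta_min), self.beta_max)

In use, each training step would query the controller with the current mini-batch KL estimate and weight the KL term of the negative ELBO by the returned value, e.g. loss = reconstruction_loss + beta * kl. The weight then falls when the KL drops below the set point (counteracting KL vanishing) and rises when it overshoots, which is the stabilizing behavior the abstract attributes to the PI controller.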