Toward a Controllable Disentanglement Network

IEEE Trans Cybern. 2022 Apr;52(4):2491-2504. doi: 10.1109/TCYB.2020.3000480. Epub 2022 Apr 5.

Abstract

This article addresses two crucial problems in learning disentangled image representations, namely, controlling the degree of disentanglement during image editing, and balancing the disentanglement strength against the reconstruction quality. To encourage disentanglement, we devise a distance covariance-based decorrelation regularization. Further, for the reconstruction step, our model leverages a soft target representation combined with the latent image code. By exploring the real-valued space of the soft target representation, we are able to synthesize novel images with the designated properties. To improve the perceptual quality of images generated by autoencoder (AE)-based models, we extend the encoder-decoder architecture with a generative adversarial network (GAN) by collapsing the AE decoder and the GAN generator into one. We also design a classification-based protocol to quantitatively evaluate the disentanglement strength of our model. The experimental results showcase the benefits of the proposed model.
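For readers unfamiliar with distance covariance, the following PyTorch sketch illustrates the general idea behind such a decorrelation penalty: the squared sample distance covariance between two batches of latent codes is zero (in the population limit) only when the codes are statistically independent, so it can be added to the training objective as a regularizer. This is an illustrative sketch only; the function names, the split of the latent code into an attribute part and a residual part, and the weighting are assumptions, not the paper's actual implementation.

    import torch

    def distance_covariance_sq(x, y):
        """Squared sample distance covariance between two batches of codes.
        x: (n, p) tensor, y: (n, q) tensor. Vanishes (in expectation) when
        the two codes are independent."""
        n = x.size(0)
        a = torch.cdist(x, x, p=2)  # pairwise Euclidean distances within x
        b = torch.cdist(y, y, p=2)  # pairwise Euclidean distances within y
        # Double-center each distance matrix.
        A = a - a.mean(dim=0, keepdim=True) - a.mean(dim=1, keepdim=True) + a.mean()
        B = b - b.mean(dim=0, keepdim=True) - b.mean(dim=1, keepdim=True) + b.mean()
        return (A * B).sum() / (n * n)

    def decorrelation_loss(z_attr, z_rest, weight=1.0):
        # Hypothetical regularizer: penalize dependence between the attribute
        # code and the remaining latent code (names are illustrative).
        return weight * distance_covariance_sq(z_attr, z_rest)

In training, a term of this kind would typically be added to the reconstruction and adversarial losses; the exact formulation, latent partition, and weighting used by the proposed model should be taken from the full text.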