Systematic control of collective variables learned from variational autoencoders

J Chem Phys. 2022 Sep 7;157(9):094116. doi: 10.1063/5.0105120.

Abstract

Variational autoencoders (VAEs) are rapidly gaining popularity within molecular simulation for discovering low-dimensional, or latent, representations, which are critical for both analyzing and accelerating simulations. However, it remains unclear how the information a VAE learns is connected to its probabilistic structure and, in turn, its loss function. Previous studies have focused on feature engineering, ad hoc modifications to loss functions, or adjustment of the prior to enforce desirable latent space properties. By applying priors of effectively arbitrary flexibility via normalizing flows, we focus instead on how adjusting the structure of the decoding model impacts the learned latent coordinate. We systematically adjust the expressive power and flexibility of the decoding distribution, observing that this has a significant impact on the structure of the latent space as measured by a suite of metrics developed in this work. By also varying weights on separate terms within each VAE loss function, we show that the level of detail encoded can be further tuned. This provides practical guidance for using VAEs to extract varying resolutions of low-dimensional information from molecular dynamics and Monte Carlo simulations.
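
As a rough illustration of the loss-term weighting and decoder flexibility the abstract describes, the sketch below implements a minimal VAE in PyTorch with a Gaussian decoder whose output variance can be fixed or learned, and a negative ELBO whose KL term carries a tunable weight beta (beta-VAE style). All names (GaussianVAE, beta_vae_loss) and architectural choices are illustrative assumptions rather than the authors' implementation, and the closed-form KL shown holds only for a standard-normal prior, not the normalizing-flow priors employed in the paper.

    import torch
    import torch.nn as nn

    class GaussianVAE(nn.Module):
        """Minimal VAE whose decoder flexibility can be tuned: the output
        variance is either fixed at one (learn_var=False) or learned as a
        global per-feature parameter (learn_var=True)."""

        def __init__(self, n_features, latent_dim=2, hidden=64, learn_var=True):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(n_features, hidden), nn.Tanh(),
                nn.Linear(hidden, 2 * latent_dim),  # mean and log-variance of q(z|x)
            )
            self.decoder_mean = nn.Sequential(
                nn.Linear(latent_dim, hidden), nn.Tanh(),
                nn.Linear(hidden, n_features),
            )
            # Decoder flexibility knob: learnable global log-variance vs. fixed.
            self.dec_logvar = nn.Parameter(torch.zeros(n_features),
                                           requires_grad=learn_var)

        def forward(self, x):
            mu, logvar = self.encoder(x).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
            return self.decoder_mean(z), mu, logvar

    def beta_vae_loss(model, x, beta=1.0):
        """Negative ELBO with weight beta on the KL term (constants dropped)."""
        x_mean, mu, logvar = model(x)
        # Gaussian reconstruction term using the decoder's (log-)variance;
        # the additive 0.5*log(2*pi) constant is omitted since it does not
        # affect optimization.
        rec = 0.5 * (((x - x_mean) ** 2) / torch.exp(model.dec_logvar)
                     + model.dec_logvar).sum(dim=-1).mean()
        # KL(q(z|x) || N(0, I)) in closed form for a standard-normal prior.
        kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=-1).mean()
        return rec + beta * kl

    if __name__ == "__main__":
        x = torch.randn(128, 10)  # stand-in for featurized simulation configurations
        model = GaussianVAE(n_features=10)
        loss = beta_vae_loss(model, x, beta=0.5)
        loss.backward()  # gradients reach encoder, decoder, and dec_logvar
        print(float(loss))

In this framing, raising beta penalizes deviation of q(z|x) from the prior more strongly (coarser latent detail), while a more flexible decoding distribution can absorb fine structure that would otherwise need to be encoded in the latent coordinate; the paper's systematic study of these two knobs is what the metrics it develops are designed to quantify.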