TURBO: The Swiss Knife of Auto-Encoders

Entropy (Basel). 2023 Oct 21;25(10):1471. doi: 10.3390/e25101471.

Abstract

We present a novel information-theoretic framework, termed TURBO, designed to systematically analyse and generalise auto-encoding methods. We start by examining the principles of the information bottleneck and of bottleneck-based networks in the auto-encoding setting, and by identifying their inherent limitations, which become more prominent for data with multiple relevant, physics-related representations. We then introduce the TURBO framework, providing a comprehensive derivation of its core concept: the maximisation of mutual information between various data representations, expressed in two directions that reflect the information flows. We show that numerous prevalent neural network models are encompassed by this framework. The paper underscores the insufficiency of the information bottleneck concept for elucidating all such models, thereby establishing TURBO as a preferable theoretical reference. The introduction of TURBO contributes to a richer understanding of data representation and of the structure of neural network models, enabling more efficient and versatile applications.
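As a brief illustration of the variational machinery alluded to above, the mutual information between an input X and a representation Z can be bounded from below with a variational decoder q_theta; this is the standard Barber–Agakov-type bound, written in illustrative notation that need not match the paper's exact formulation:

I(X; Z) = H(X) - H(X \mid Z) \ge H(X) + \mathbb{E}_{p(x,z)}\left[\log q_\theta(x \mid z)\right],

where the gap equals \mathbb{E}_{p(z)}\left[D_{\mathrm{KL}}\!\left(p(x \mid z) \,\middle\|\, q_\theta(x \mid z)\right)\right] \ge 0, so the bound is tight when the variational decoder matches the true posterior. Maximising such lower bounds, with decoders parameterised along both directions of the information flow, gives tractable training objectives of the kind the TURBO framework is described as generalising.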

Keywords: Kullback–Leibler divergence; TURBO; auto-encoder; generalisation; information bottleneck; lower bound; mutual information; physical latent space; representations; variational approximation.