The Entropy Gain of Linear Systems and Some of Its Implications

Entropy (Basel). 2021 Jul 24;23(8):947. doi: 10.3390/e23080947.

Abstract

We study the increase in per-sample differential entropy rate of random sequences and processes after being passed through a non-minimum-phase (NMP) discrete-time, linear time-invariant (LTI) filter G. For LTI discrete-time filters and random processes, it has long been established by Theorem 14 in Shannon's seminal paper that this entropy gain, 𝒢(G), equals (1/2π) ∫_{−π}^{π} log|G(e^{jω})| dω. In this note, we first show that Shannon's Theorem 14 does not hold in general. Then, we prove that, when comparing the input differential entropy to that of the entire (longer) output of G, the entropy gain equals 𝒢(G). We show that the entropy gain between equal-length input and output sequences is upper bounded by 𝒢(G) and is attained if and only if there exists an output additive disturbance with finite differential entropy (no matter how small) or a random initial state. Unlike what happens with linear maps, the entropy gain in this case depends on the distribution of all the signals involved. We illustrate some of the consequences of these results by presenting their implications in three different problems. Specifically: conditions for equality in an information inequality of importance in networked control problems; an extension of existing results on the rate-distortion function for non-stationary Gaussian sources to a much broader class of sources; and an observation on the capacity of auto-regressive Gaussian channels with feedback.
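As an illustrative sketch (not part of the published abstract), the frequency integral above can be checked numerically for a toy NMP filter. For G(z) = 1 − 2z⁻¹, whose single zero z₀ = 2 lies outside the unit circle, Jensen's formula predicts (1/2π) ∫_{−π}^{π} log|G(e^{jω})| dω = log 2 ≈ 0.693. The snippet below, with all names chosen here for illustration, verifies this with a plain Riemann-sum approximation:

```python
import numpy as np

# Toy non-minimum-phase FIR filter G(z) = 1 - 2 z^{-1}: its only zero,
# z0 = 2, lies outside the unit circle, so by Jensen's formula the
# per-sample entropy gain (1/2π) ∫ log|G(e^{jω})| dω should equal
# log|z0| = log 2.
omega = np.linspace(-np.pi, np.pi, 200_000, endpoint=False)
G = 1.0 - 2.0 * np.exp(-1j * omega)    # frequency response G(e^{jω})

# Mean over uniform frequency samples approximates (1/2π) ∫ log|G| dω.
gain = np.mean(np.log(np.abs(G)))

print(f"numerical entropy gain: {gain:.6f}")   # ≈ 0.693147
print(f"log|z0| = log 2:        {np.log(2.0):.6f}")
```

The integrand is smooth here because the zero lies strictly off the unit circle; a minimum-phase filter with the mirrored zero at z = 1/2, G(z) = 1 − 0.5z⁻¹, would give an integral of zero, i.e., no entropy gain.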

Keywords: differential entropy rate; entropy loss in linear filters; networked control; non-minimum phase linear time-invariant systems; rate-distortion for non-stationary sources; feedback capacity.