Invertible Residual Blocks in Deep Learning Networks

IEEE Trans Neural Netw Learn Syst. 2023 Jan 31:PP. doi: 10.1109/TNNLS.2023.3238397. Online ahead of print.

Abstract

Residual blocks have been widely used in deep learning networks. However, information may be lost in residual blocks because rectified linear units (ReLUs) discard part of their input. To address this issue, invertible residual networks have been proposed recently, but they generally impose strict restrictions that limit their applications. In this brief, we investigate the conditions under which a residual block is invertible. A necessary and sufficient condition is presented for the invertibility of residual blocks with one layer of ReLU inside the block. In particular, for widely used residual blocks with convolutions, we show that such residual blocks are invertible under weak conditions if the convolution is implemented with certain zero-padding methods. Inverse algorithms are also proposed, and experiments are conducted to demonstrate the effectiveness of the proposed inverse algorithms and to verify the theoretical results.
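As a rough illustration of the setting (a minimal sketch, not the paper's algorithm): the code below builds a residual block y = x + W2 ReLU(W1 x) with a single ReLU layer inside the block and inverts it by fixed-point iteration. Invertibility here is enforced by the stronger, well-known contractiveness assumption on the residual branch (as in i-ResNet-style constructions); the weaker conditions and the inverse algorithms proposed in the paper are given in the full text. All names (W1, W2, invert_residual_block) are illustrative.

import numpy as np

rng = np.random.default_rng(0)
d = 8
W1 = rng.standard_normal((d, d))
W2 = rng.standard_normal((d, d))

# Scale so the residual branch g(x) = W2 @ relu(W1 @ x) is contractive
# (Lipschitz constant < 1); this is an assumption for the sketch only.
W2 *= 0.9 / (np.linalg.norm(W2, 2) * np.linalg.norm(W1, 2))

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x):
    # Residual block with one ReLU inside the block: y = x + g(x).
    return x + W2 @ relu(W1 @ x)

def invert_residual_block(y, num_iters=100):
    # Fixed-point iteration x <- y - g(x); converges when g is contractive.
    x = y.copy()
    for _ in range(num_iters):
        x = y - W2 @ relu(W1 @ x)
    return x

x = rng.standard_normal(d)
y = residual_block(x)
x_rec = invert_residual_block(y)
print(np.max(np.abs(x - x_rec)))  # close to 0: the input is recovered from the output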