Generative Image Reconstruction From Gradients

IEEE Trans Neural Netw Learn Syst. 2024 Apr 10:PP. doi: 10.1109/TNNLS.2024.3383722. Online ahead of print.

Abstract

In this article, we propose a method, generative image reconstruction from gradients (GIRG), for recovering training images from gradients in a federated learning (FL) setting, where privacy is intended to be preserved by sharing model weights and gradients rather than raw training data. Previous studies have shown the potential for revealing clients' private information or even pixel-level recovery of training images from shared gradients. However, existing methods are limited to low-resolution images and small batch sizes (BSs), or they require prior knowledge about the client data. GIRG utilizes a conditional generative model to reconstruct training images and their corresponding labels from the shared gradients. Unlike previous generative model-based methods, GIRG does not require prior knowledge of the training data. Furthermore, GIRG optimizes the weights of the conditional generative model to generate highly accurate "dummy" images, instead of optimizing the input vectors of the generative model. Comprehensive empirical results show that GIRG is able to recover high-resolution images with large BSs and can even recover images from the aggregation of gradients from multiple participants. These results reveal the vulnerability of current FL practices and call for immediate efforts to prevent inversion attacks in gradient-sharing-based collaborative training.
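The core idea described above, matching gradients produced by "dummy" images against the gradients shared by a client, while optimizing the weights of a conditional generator rather than a latent input, can be sketched as follows. This is a minimal toy illustration of the general gradient-matching attack family, not the paper's actual GIRG implementation; the tiny victim model, generator architecture, image size, and L2 gradient-matching objective are all assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy victim model whose gradients are shared in FL (assumed architecture).
model = nn.Sequential(nn.Flatten(), nn.Linear(16, 4))
loss_fn = nn.CrossEntropyLoss()

# The "client" computes gradients on private data; only these are shared.
x_true = torch.randn(1, 1, 4, 4)
y_true = torch.tensor([2])
shared_grads = torch.autograd.grad(
    loss_fn(model(x_true), y_true), model.parameters())
shared_grads = [g.detach() for g in shared_grads]

# Attacker's conditional generator: label -> dummy image. Optimizing its
# weights (not a latent input vector) is the distinguishing choice here.
class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(4, 8)
        self.net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 16))

    def forward(self, y):
        return self.net(self.emb(y)).view(-1, 1, 4, 4)

gen = Generator()
y_guess = torch.tensor([2])  # in the full method, labels are also recovered
opt = torch.optim.Adam(gen.parameters(), lr=1e-2)

initial_loss = None
for step in range(200):
    opt.zero_grad()
    x_dummy = gen(y_guess)
    # create_graph=True lets us backpropagate through the gradient computation
    # into the generator's weights (a second-order gradient).
    dummy_grads = torch.autograd.grad(
        loss_fn(model(x_dummy), y_guess), model.parameters(), create_graph=True)
    # L2 gradient-matching objective between dummy and shared gradients.
    match_loss = sum(((dg - sg) ** 2).sum()
                     for dg, sg in zip(dummy_grads, shared_grads))
    if initial_loss is None:
        initial_loss = match_loss.item()
    match_loss.backward()
    opt.step()

final_loss = match_loss.item()
```

As the gradient-matching loss falls, the generator's output is driven toward an image whose induced gradients agree with the client's, which is how pixel-level leakage arises from "privacy-preserving" gradient sharing.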