Membership inference attack on differentially private block coordinate descent

PeerJ Comput Sci. 2023 Oct 5;9:e1616. doi: 10.7717/peerj-cs.1616. eCollection 2023.

Abstract

The extraordinary success of deep learning is made possible by the availability of crowd-sourced, large-scale training datasets. These datasets often contain personal and confidential information and thus have great potential for misuse, raising privacy concerns. Consequently, privacy-preserving deep learning has become a primary research interest. One prominent approach to preventing the leakage of sensitive information about the training data is to apply differential privacy during training, which aims to preserve the privacy of deep learning models. Although such models are claimed to safeguard against privacy attacks targeting sensitive information, little work in the literature has practically evaluated this capability by mounting a sophisticated attack against them. Recently, DP-BCD was proposed as an alternative to the state-of-the-art DP-SGD for preserving the privacy of deep learning models, offering low privacy cost and fast convergence while producing highly accurate predictions. To test its practical capability, in this article we empirically evaluate the impact of a sophisticated privacy attack, the membership inference attack, against it in both black-box and white-box settings. More precisely, we inspect how much information about a differentially private deep model's training data can be inferred. We evaluate our experiments on benchmark datasets using AUC, attacker advantage, precision, recall, and F1-score as performance metrics. The experimental results show that DP-BCD keeps its promise to preserve privacy against strong adversaries while providing acceptable model utility compared to state-of-the-art techniques.
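To make the evaluation setup concrete, the following is a minimal, self-contained sketch of a black-box membership inference attack of the kind the abstract describes, together with the reported metrics (AUC, attacker advantage, precision, recall, F1-score). It is illustrative only and not the authors' implementation: the per-example losses are synthetic stand-ins (members of the training set are assumed to incur lower loss than non-members under an overfit model), and the threshold-on-loss attack and advantage definition (TPR minus FPR) follow the standard formulation from the membership inference literature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-example losses: under an overfit model, training-set
# members tend to have lower loss than non-members (illustrative values).
member_loss = rng.normal(0.4, 0.2, 1000)
nonmember_loss = rng.normal(0.8, 0.3, 1000)

losses = np.concatenate([member_loss, nonmember_loss])
labels = np.concatenate([np.ones(1000), np.zeros(1000)])  # 1 = member

# Black-box threshold attack: predict "member" when loss is below a cutoff.
threshold = np.median(losses)
pred = (losses < threshold).astype(int)

tp = np.sum((pred == 1) & (labels == 1))
fp = np.sum((pred == 1) & (labels == 0))
fn = np.sum((pred == 0) & (labels == 1))
tn = np.sum((pred == 0) & (labels == 0))

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

# Attacker advantage: true-positive rate minus false-positive rate.
advantage = tp / (tp + fn) - fp / (fp + tn)

# AUC via the rank-sum (Mann-Whitney U) identity; score = -loss, so a
# lower loss means a higher membership score. Ties are negligible here.
scores = -losses
order = np.argsort(scores)
ranks = np.empty(len(scores))
ranks[order] = np.arange(1, len(scores) + 1)
n_pos = labels.sum()
n_neg = len(labels) - n_pos
auc = (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

An effective defense such as DP-BCD is expected to push the loss distributions of members and non-members together, driving the AUC toward 0.5 and the attacker advantage toward 0, which is how the metrics above quantify privacy leakage.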

Keywords: Differential privacy; Differentially private block coordinate descent; Membership inference attack; Privacy-preserving deep learning.

Grants and funding

This work was supported by the National Key Research and Development Program of China (No. 2020YFB1005804), National Natural Science Foundation of China (Nos. 62372121, 61632009), the High-Level Talents Program of Higher Education in Guangdong Province (No. 2016ZJ01), and the HEC, Faculty Development Program, Pakistan. There was no additional external funding received for this study. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.