Preserving differential privacy in deep neural networks with relevance-based adaptive noise imposition

Neural Netw. 2020 May;125:131-141. doi: 10.1016/j.neunet.2020.02.001. Epub 2020 Feb 11.

Abstract

In recent years, deep learning has achieved remarkable results in the field of artificial intelligence. However, training deep neural networks may leak individuals' private information: given the model and some background knowledge about a target individual, an adversary can infer that individual's sensitive features. It is therefore imperative to protect the sensitive information in the training data. Differential privacy is a state-of-the-art paradigm for providing privacy guarantees on datasets, effectively protecting private and sensitive information from adversarial attacks. However, existing privacy-preserving models based on differential privacy are less than satisfactory, since traditional approaches inject the same amount of noise into all parameters, which can harm the trade-off between model utility and the privacy guarantee of the training data. In this paper, we present a general differentially private deep neural network learning framework based on relevance analysis, which aims to bridge the gap between private and non-private models while providing an effective privacy guarantee for sensitive information. The proposed model perturbs gradients according to the relevance between neurons in different layers and the model output. Specifically, during backward propagation, more noise is added to the gradients of neurons that are less relevant to the model output, and vice versa. Experiments on five real datasets demonstrate that our mechanism not only bridges the gap between private and non-private models, but also effectively prevents the disclosure of sensitive information.
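To make the abstract's mechanism concrete, below is a minimal NumPy sketch, not the authors' code, of the core idea: per-layer gradients are first clipped to bound sensitivity (the standard DP-SGD step), then perturbed with Gaussian noise whose scale shrinks as a neuron's relevance to the output grows. The relevance proxy (|gradient × activation|, normalized within the layer), the noise-scaling rule, and all names and parameters are illustrative assumptions rather than the paper's exact formulas.

```python
import numpy as np

def relevance_scores(grad, activation):
    # Illustrative relevance proxy (assumption): |gradient * activation|,
    # scaled so the most relevant neuron in the layer has score 1.
    r = np.abs(grad * activation)
    return r / (r.max() + 1e-12)

def private_gradient(grad, activation, clip_norm=1.0, sigma=1.0):
    """Clip a layer's gradient and add relevance-adaptive Gaussian noise.

    Less relevant neurons receive more noise, more relevant neurons
    receive less, following the idea described in the abstract.
    """
    # Clip to bound the L2 sensitivity of the gradient.
    norm = np.linalg.norm(grad)
    grad = grad * min(1.0, clip_norm / (norm + 1e-12))

    r = relevance_scores(grad, activation)
    # Noise scale decreases with relevance; the 0.5 floor keeps every
    # coordinate noised (a real DP analysis would calibrate this bound).
    per_neuron_sigma = sigma * clip_norm * (1.0 - 0.5 * r)
    noise = np.random.normal(0.0, 1.0, size=grad.shape) * per_neuron_sigma
    return grad + noise

# Usage: perturb one layer's gradient during backward propagation.
rng = np.random.default_rng(0)
grad = rng.normal(size=8)   # gradient w.r.t. one layer's neurons
act = rng.normal(size=8)    # that layer's activations
print(private_gradient(grad, act))
```

Under this sketch, uniform-noise DP-SGD corresponds to dropping the relevance term; the adaptive scaling is what lets high-relevance gradients stay closer to their true values and thereby preserve model utility.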

Keywords: Deep neural networks; Differential privacy; Relevance analysis.

MeSH terms

  • Artificial Intelligence / trends
  • Deep Learning* / trends
  • Humans
  • Neural Networks, Computer*
  • Privacy*