A Differential Privacy Strategy Based on Local Features of Non-Gaussian Noise in Federated Learning

Sensors (Basel). 2022 Mar 22;22(7):2424. doi: 10.3390/s22072424.

Abstract

As an emerging artificial intelligence technology, federated learning plays a significant role in privacy preservation for machine learning, although its main objective is to prevent peers from directly accessing each other's data. However, external attackers can intercept metadata in transit and recover the original data through reconstruction attacks or other techniques, which poses a great threat to the security of the federated learning system. In this paper, we propose a differential privacy strategy, including encryption and decryption methods, based on local features of non-Gaussian noise, which aggregates the noisy metadata through a sequential Kalman filter in federated learning scenarios to increase the reliability of the federated learning method. We refer to the local features of non-Gaussian noise as non-Gaussian noise fragments. Compared with traditional methods, the proposed method shows stronger security performance for two reasons. Firstly, non-Gaussian noise fragments have more complex statistics, making them more difficult for attackers to identify. Secondly, to obtain accurate statistical features, attackers must aggregate all of the noise fragments, which is very difficult as the number of clients grows. Our experiments demonstrate that the proposed method can greatly enhance the system's security.
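The idea described above can be illustrated with a minimal sketch: each client perturbs its shared value with a local non-Gaussian noise fragment (here, a two-component Gaussian mixture, an illustrative choice not taken from the paper), and the aggregator fuses the noisy updates one at a time with a scalar Kalman filter. All function names and parameters below are hypothetical.

```python
import random

rng = random.Random(0)

def noise_fragment(rng):
    """One client's local non-Gaussian noise fragment: a draw from a
    two-component Gaussian mixture (illustrative non-Gaussian choice)."""
    if rng.random() < 0.5:
        return rng.gauss(-1.0, 0.5)
    return rng.gauss(1.0, 0.5)

def sequential_kalman_aggregate(noisy_updates, meas_var):
    """Fuse the clients' noisy scalar updates sequentially with a
    scalar Kalman filter (static-state model, equal measurement noise)."""
    est = noisy_updates[0]
    var = meas_var
    for z in noisy_updates[1:]:
        gain = var / (var + meas_var)   # Kalman gain
        est = est + gain * (z - est)    # measurement update
        var = (1.0 - gain) * var        # posterior variance shrinks
    return est

true_update = 0.3                        # the shared value every client perturbs
noisy = [true_update + noise_fragment(rng) for _ in range(200)]
recovered = sequential_kalman_aggregate(noisy, meas_var=1.25)
```

Because the mixture is zero-mean, the sequential filter converges toward the true value as clients accumulate, while any single intercepted update carries noise whose statistics are hard to characterize from one fragment alone.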

Keywords: Kalman filter; differential privacy; federated learning (FL); non-Gaussian noise.

MeSH terms

  • Artificial Intelligence*
  • Humans
  • Machine Learning
  • Privacy*
  • Reproducibility of Results
  • Research Design