Secure Aggregation Protocol Based on DC-Nets and Secret Sharing for Decentralized Federated Learning

Sensors (Basel). 2024 Feb 17;24(4):1299. doi: 10.3390/s24041299.

Abstract

In the era of big data, vast quantities of data are generated every second by devices of all kinds, and training machine-learning models on these data has become increasingly common. However, the data used for training are often sensitive and may contain medical, banking, or consumer records. A leak of such data can harm the individuals involved and expose the responsible companies to legal sanctions. In this context, Federated Learning (FL) emerges as an approach to preserving the privacy of personal data. However, even when only the gradients of the local models are shared with the central server, certain attacks can reconstruct user data, allowing a malicious server to violate the core FL principle of keeping local data private. We propose a secure aggregation protocol for Decentralized Federated Learning that does not require a central server to orchestrate the aggregation process. To achieve this, we combine a Multi-Secret-Sharing scheme with a Dining Cryptographers Network (DC-net). We validate the proposed protocol in simulations using the MNIST handwritten digits dataset. The protocol achieves results comparable to Federated Learning with the FedAvg protocol while adding a layer of privacy to the models. Furthermore, its runtime overhead does not significantly affect the total training time, unlike protocols based on Homomorphic Encryption.
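For intuition, the sketch below illustrates the DC-net-style pairwise-masking idea that underlies this kind of secure aggregation: each pair of participants derives a shared mask, one adds it and the other subtracts it, so the masks cancel in the global sum and only the aggregate gradient is revealed. This is a minimal illustration under assumed parameters (client count, gradient size, seed agreement), not the authors' exact protocol, which additionally relies on a Multi-Secret-Sharing scheme for the decentralized setting.

```python
# Minimal sketch of DC-net-style pairwise masking for secure gradient
# aggregation (not the paper's full protocol; the multi-secret-sharing
# component is omitted here).
import numpy as np

N_CLIENTS = 4   # hypothetical number of participants
GRAD_DIM = 8    # hypothetical gradient length

rng = np.random.default_rng(0)
local_grads = [rng.normal(size=GRAD_DIM) for _ in range(N_CLIENTS)]

# Pairwise seeds; in practice these would be agreed upon, e.g., via a
# key-exchange step between each pair of clients.
pair_seed = {(i, j): int(rng.integers(0, 2**32))
             for i in range(N_CLIENTS) for j in range(i + 1, N_CLIENTS)}

def mask_update(i, grad):
    """Return grad plus the DC-net masks shared with every other client."""
    masked = grad.copy()
    for j in range(N_CLIENTS):
        if j == i:
            continue
        a, b = min(i, j), max(i, j)
        shared = np.random.default_rng(pair_seed[(a, b)]).normal(size=GRAD_DIM)
        # The lower-indexed client adds the mask, the higher-indexed one
        # subtracts it, so each pairwise mask cancels in the global sum.
        masked += shared if i == a else -shared
    return masked

masked_updates = [mask_update(i, g) for i, g in enumerate(local_grads)]

# Any participant can compute the aggregate without seeing individual gradients.
aggregate = np.sum(masked_updates, axis=0)
assert np.allclose(aggregate, np.sum(local_grads, axis=0))
print("Average gradient:", aggregate / N_CLIENTS)
```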

Keywords: DC-nets; decentralized federated learning; privacy; secret sharing; secure aggregation.

Grants and funding

This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—Brasil (CAPES)—Finance Code 001, and the APC was funded by the National Laboratory for Scientific Computing—Brasil (LNCC).