Consensus-Based Cooperative Algorithms for Training Over Distributed Data Sets Using Stochastic Gradients

IEEE Trans Neural Netw Learn Syst. 2022 Oct;33(10):5579-5589. doi: 10.1109/TNNLS.2021.3071058. Epub 2022 Oct 5.

Abstract

In this article, distributed algorithms are proposed for training a group of neural networks with private data sets. Stochastic gradients are utilized in order to eliminate the requirement for true gradients. To obtain a universal model of the distributed neural networks trained using local data sets only, consensus tools are introduced to drive the model toward the optimum. Most existing works employ diminishing learning rates, which are often slow and impractical for online learning; constant learning rates are studied in some recent works, but the principle for choosing the rates is not well established. In this article, constant learning rates are adopted to endow the proposed algorithms with tracking ability. Under mild conditions, the convergence of the proposed algorithms is established by examining the error dynamics of the connected agents, which yields an upper bound for selecting the constant learning rates. The performance of the proposed algorithms is analyzed with and without gradient noise, in the sense of mean square error (MSE). It is proved that the MSE converges to a bounded error determined by the gradient noise, and that the MSE converges to zero when gradient noise is absent. Simulation results are provided to validate the effectiveness of the proposed algorithms.
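
To illustrate the general structure described in the abstract (a consensus step over neighbors combined with a local stochastic-gradient step using a constant learning rate), the following minimal Python sketch applies the idea to a shared least-squares model with private local data sets. It is not the authors' exact algorithm; the ring topology, mixing weights, step size, and all variable names are illustrative assumptions.

```python
# Minimal sketch (assumption, not the paper's exact algorithm):
# consensus + constant-step stochastic gradient over a ring of agents,
# each holding a private data set for a common least-squares model.
import numpy as np

rng = np.random.default_rng(0)

n_agents, dim, n_local, batch = 4, 5, 200, 10
alpha = 0.05  # constant learning rate; must be below a stability bound

# Ground-truth model and private data for each agent.
w_true = rng.normal(size=dim)
X = [rng.normal(size=(n_local, dim)) for _ in range(n_agents)]
y = [Xi @ w_true + 0.1 * rng.normal(size=n_local) for Xi in X]

# Doubly stochastic mixing matrix for a ring graph (equal weights).
A = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    A[i, (i - 1) % n_agents] = 1.0 / 3.0
    A[i, (i + 1) % n_agents] = 1.0 / 3.0
    A[i, i] = 1.0 / 3.0

W = np.zeros((n_agents, dim))  # each row is one agent's local model

for k in range(2000):
    # Consensus step: each agent averages its neighbors' models.
    W_mix = A @ W
    # Local step: stochastic gradient on a private mini-batch,
    # applied with the constant learning rate alpha.
    for i in range(n_agents):
        idx = rng.choice(n_local, size=batch, replace=False)
        grad = X[i][idx].T @ (X[i][idx] @ W_mix[i] - y[i][idx]) / batch
        W[i] = W_mix[i] - alpha * grad

# With noisy stochastic gradients the deviation settles at a bounded level;
# it shrinks further as the gradient noise (mini-batch variance) decreases.
mse = np.mean((W - w_true) ** 2)
print(f"final mean-square deviation from w_true: {mse:.4e}")
```

In this sketch, the constant learning rate keeps the agents responsive to new data (the tracking ability mentioned in the abstract), at the cost of a residual error floor set by the gradient noise, consistent with the MSE behavior the abstract describes.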