Neurodynamic approaches for multi-agent distributed optimization

Neural Netw. 2024 Jan:169:673-684. doi: 10.1016/j.neunet.2023.11.025. Epub 2023 Nov 10.

Abstract

This paper considers a class of multi-agent distributed convex optimization problems with a common set of constraints and provides several continuous-time neurodynamic approaches. In the problem transformation, l1 and l2 penalty methods are used, respectively, to cast the linear consensus constraint into the objective function, which avoids introducing auxiliary variables and involves only the exchange of primal variables during the solution process. For nonsmooth cost functions, two differential inclusions with projection operators are proposed. Without convexity of the differential inclusions, their asymptotic behavior and convergence properties are explored. For smooth cost functions, by harnessing the smoothness of the l2 penalty function, finite- and fixed-time convergent algorithms are provided via a specifically designed average consensus estimator. Finally, several numerical examples in a multi-agent simulation environment are conducted to illustrate the effectiveness of the proposed neurodynamic approaches.
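To make the penalty idea concrete, the following is a minimal sketch (not the paper's algorithm) of an l2-penalty neurodynamic flow for distributed optimization, discretized by forward Euler. All names, the quadratic local costs, the ring graph, and the gains are illustrative assumptions: each agent i holds f_i(x) = 0.5*(x - a_i)^2, and the consensus constraint is absorbed into the objective through a Laplacian penalty, so each agent only exchanges its primal state with its neighbors.

```python
import numpy as np

# Hypothetical sketch: l2-penalty flow for distributed consensus optimization.
# Each agent i minimizes f_i(x) = 0.5*(x - a_i)^2 subject to consensus.
# The penalized dynamics are
#   dx_i/dt = -grad f_i(x_i) - rho * sum_{j in N(i)} (x_i - x_j),
# i.e. dx/dt = -(x - a) - rho * L @ x with graph Laplacian L.
# For large rho the equilibrium approaches the global minimizer of
# sum_i f_i, which here is the mean of the a_i.

a = np.array([1.0, 3.0, 5.0, 7.0])            # local data of 4 agents
# Laplacian of a 4-node ring graph (illustrative topology)
L = np.array([[ 2, -1,  0, -1],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [-1,  0, -1,  2]], dtype=float)

rho, dt = 50.0, 1e-3                          # penalty gain, Euler step
x = np.zeros(4)                               # agent states
for _ in range(20000):
    x = x + dt * (-(x - a) - rho * (L @ x))   # forward-Euler integration

print(x)  # all entries close to mean(a) = 4.0
```

Note the penalty formulation keeps the dynamics purely primal: no Lagrange multipliers or auxiliary variables appear, and each agent's update uses only its own gradient and the states of its neighbors in the graph.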

Keywords: Distributed optimization; Exponential convergence; Finite/fixed-time convergence; Multi-agent system.

MeSH terms

  • Algorithms*
  • Computer Simulation
  • Consensus
  • Neural Networks, Computer*