A second-order accelerated neurodynamic approach for distributed convex optimization

Neural Netw. 2022 Feb:146:161-173. doi: 10.1016/j.neunet.2021.11.013. Epub 2021 Nov 16.

Abstract

Based on the theory of inertial systems, a second-order accelerated neurodynamic approach is designed to solve distributed convex optimization problems with inequality and set constraints. Most existing approaches for distributed convex optimization are first-order, and it is usually difficult to analyze the convergence rate of the state solution of such first-order approaches. Owing to the control design for acceleration, second-order neurodynamic approaches can often achieve a faster convergence rate. Moreover, the existing second-order approaches are mostly designed for unconstrained distributed convex optimization problems and are not suitable for constrained ones. It is proved that the state solution of the designed neurodynamic approach converges to the optimal solution of the considered distributed convex optimization problem. An error function that characterizes the performance of the designed neurodynamic approach is shown to converge superquadratically. Several numerical examples demonstrate the effectiveness of the presented second-order accelerated neurodynamic approach.
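To give intuition for the inertial (second-order) dynamics underlying such approaches, the following is a minimal sketch, not the paper's actual network: a generic heavy-ball-type system x'' + γx' + ∇f(x) = 0, discretized with semi-implicit Euler, applied to a toy unconstrained strongly convex quadratic. The matrix `A`, vector `b`, damping `gamma`, and step size `h` are all illustrative assumptions.

```python
import numpy as np

# Toy strongly convex objective f(x) = 0.5 x^T A x - b^T x (illustrative choice)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(A, b)          # unique minimizer of f

def grad_f(x):
    return A @ x - b

gamma, h = 3.0, 0.01                    # damping and step size (assumed values)
x = np.zeros(2)                         # state
v = np.zeros(2)                         # velocity: the inertial, second-order term
for _ in range(5000):
    v += h * (-gamma * v - grad_f(x))   # v' = -gamma * v - grad f(x)
    x += h * v                          # x' = v

print(np.linalg.norm(x - x_star))       # distance to the minimizer shrinks toward 0
```

The velocity term carries momentum across iterations, which is the mechanism by which second-order (inertial) dynamics can outpace plain gradient flow; the paper extends such dynamics to the constrained, distributed multi-agent setting.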

Keywords: Convergence rate; Inertial systems; Second-order neurodynamic approach.

MeSH terms

  • Computer Simulation
  • Neural Networks, Computer*