Novel projection neurodynamic approaches for constrained convex optimization

Neural Netw. 2022 Jun:150:336-349. doi: 10.1016/j.neunet.2022.03.011. Epub 2022 Mar 15.

Abstract

Constrained convex optimization problems have emerged in a variety of scientific and engineering applications that often require efficient and fast solutions. Inspired by Nesterov's accelerated method for solving unconstrained convex and strongly convex optimization problems, in this paper we propose two novel accelerated projection neurodynamic approaches for constrained smooth convex and strongly convex optimization based on the variational approach. First, for smooth convex optimization problems, a non-autonomous accelerated projection neurodynamic approach (NAAPNA) is presented, and the existence, uniqueness, and feasibility of its solution are analyzed rigorously. We prove that NAAPNA has a convergence rate inversely proportional to the square of the running time. In addition, we present a novel autonomous accelerated projection neurodynamic approach (AAPNA) for addressing constrained, smooth, strongly convex optimization problems and prove the existence and uniqueness of the strong global solution of AAPNA based on the Cauchy-Lipschitz-Picard theorem. Furthermore, we prove the global convergence of AAPNA with different exponential convergence rates for different parameters. Compared with existing projection neurodynamic approaches based on Brouwer's fixed point theorem, both NAAPNA and AAPNA use the projection operator of an auxiliary variable to map the primal variables to the constrained feasible region, which makes it easier for the proposed neurodynamic approaches to achieve algorithmic acceleration. Finally, the effectiveness of NAAPNA and AAPNA is illustrated with several numerical examples.
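The core idea described above — coupling Nesterov-style acceleration with a projection of an auxiliary variable onto the feasible set — can be illustrated with a minimal discrete-time sketch. This is not the authors' continuous-time dynamics; the function names, the box constraint set, and the momentum schedule are illustrative assumptions standing in for a generic constrained smooth convex problem.

```python
import numpy as np

def project_box(x, lo, hi):
    # Euclidean projection onto a box constraint set {x : lo <= x <= hi}
    # (an illustrative stand-in for a general closed convex feasible region)
    return np.clip(x, lo, hi)

def accelerated_projected_gradient(grad, x0, lo, hi, step=0.01, iters=500):
    # Discrete-time analogue of an accelerated projection dynamic:
    # the auxiliary (momentum) variable y takes the gradient step, and the
    # projection maps it back onto the feasible region, as sketched in the
    # abstract; beta is the classical Nesterov momentum coefficient.
    x = x0.copy()
    y = x0.copy()  # auxiliary variable
    for k in range(1, iters + 1):
        x_new = project_box(y - step * grad(y), lo, hi)
        beta = (k - 1) / (k + 2)  # Nesterov momentum schedule
        y = x_new + beta * (x_new - x)
        x = x_new
    return x
```

For a smooth strongly convex objective such as f(x) = ||x - c||^2 with c outside the box, the iterates converge to the projection of c onto the feasible set, matching the O(1/t^2)-type behavior the abstract attributes to the accelerated dynamics.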

Keywords: Accelerated neurodynamic approaches; Arithmetical and exponential convergence rate; Constrained optimization; Variational method.

MeSH terms

  • Neural Networks, Computer*