Distributed continuous-time accelerated neurodynamic approaches for sparse recovery via smooth approximation to L1-minimization

Neural Netw. 2024 Apr:172:106123. doi: 10.1016/j.neunet.2024.106123. Epub 2024 Jan 10.

Abstract

This paper develops two continuous-time distributed accelerated neurodynamic approaches for sparse recovery via smooth approximation to the L1-norm minimization problem. First, the L1-norm minimization problem is converted into a distributed smooth optimization problem by using multiagent consensus theory and smooth approximation. Then, a distributed primal-dual accelerated neurodynamic approach is designed by using the Karush-Kuhn-Tucker (KKT) conditions and Nesterov's accelerated method. Furthermore, to reduce the structural complexity of the presented neurodynamic approach, we use a projection matrix to eliminate a dual variable from the KKT conditions and propose a distributed accelerated neurodynamic approach with a simpler structure. It is proved that both proposed distributed neurodynamic approaches achieve an O(1/t^2) convergence rate. Finally, simulation results on sparse recovery are given to demonstrate the effectiveness of the proposed approaches.
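The abstract does not specify the smoothing function or the network dynamics. As a rough illustration of two ingredients it names, a smooth surrogate for the L1 norm and a Nesterov-type second-order flow with O(1/t^2) objective decay, the sketch below uses the common surrogate sqrt(x_i^2 + eps^2) and a penalized, single-agent (non-distributed) reformulation of Ax = b. The surrogate, the penalty weight rho, the Euler step size, and the names smooth_l1_grad and accelerated_flow are illustrative assumptions, not the paper's formulation.

import numpy as np

# Minimal centralized sketch (assumptions noted above): smooth surrogate for |x_i|
# and a semi-implicit Euler discretization of the Nesterov-type flow
#     x'' + (3/t) x' + grad f(x) = 0,
# applied to the penalized problem  min_x  sum_i sqrt(x_i^2 + eps^2) + (rho/2) ||A x - b||^2.

def smooth_l1_grad(x, eps=1e-3):
    # Gradient of the smooth L1 surrogate sum_i sqrt(x_i^2 + eps^2).
    return x / np.sqrt(x**2 + eps**2)

def accelerated_flow(A, b, rho=100.0, eps=1e-3, h=1e-3, T=50.0):
    # Integrate the second-order ODE from x(0) = 0 with step size h up to time T.
    x = np.zeros(A.shape[1])
    v = np.zeros_like(x)              # velocity x'
    t = h
    while t < T:
        grad = smooth_l1_grad(x, eps) + rho * A.T @ (A @ x - b)
        v += h * (-(3.0 / t) * v - grad)
        x += h * v
        t += h
    return x

# Usage: recover a 5-sparse vector from 40 random measurements.
rng = np.random.default_rng(0)
n, m, k = 100, 40, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true
x_hat = accelerated_flow(A, b)
print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))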

Keywords: Distributed neurodynamic approaches; Nesterov’s accelerated method; Smooth approximation; Sparse recovery.

MeSH terms

  • Computer Simulation
  • Consensus
  • Neural Networks, Computer*