Two Recurrent Neural Networks With Reduced Model Complexity for Constrained l₁-Norm Optimization

IEEE Trans Neural Netw Learn Syst. 2023 Sep;34(9):6173-6185. doi: 10.1109/TNNLS.2021.3133836. Epub 2023 Sep 1.

Abstract

Owing to the robustness and sparsity-inducing properties of least absolute deviation (LAD, or l₁-norm) optimization, developing effective solution methods for it has become an important topic. Recurrent neural networks (RNNs) are reported to be capable of effectively solving constrained l₁-norm optimization problems, but their convergence speed is limited. To accelerate convergence, this article introduces two RNNs, in the form of continuous- and discrete-time systems, for solving l₁-norm optimization problems with linear equality and inequality constraints. The two RNNs are theoretically proven to be globally convergent to optimal solutions without requiring any additional conditions. With reduced model complexity, they can significantly expedite constrained l₁-norm optimization. Numerical simulation results show that the two RNNs require much less computation time than related RNNs and numerical optimization algorithms for linearly constrained l₁-norm optimization.
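The abstract does not reproduce the proposed network models, but the equality-constrained case, min ||x||₁ subject to Ax = b, can be illustrated with a generic discrete-time iteration. The sketch below uses a standard ADMM-style scheme (projection plus soft-thresholding) viewed as a discrete-time recurrent system; it is not the article's reduced-complexity RNNs, and the names soft_threshold, l1_equality_admm, rho, and iters are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    """Elementwise soft-thresholding: the proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def l1_equality_admm(A, b, rho=1.0, iters=500):
    """Minimize ||x||_1 subject to A x = b via ADMM iterations,
    interpreted as a discrete-time dynamical system on the state (x, z, u)."""
    m, n = A.shape
    # Precompute the projector onto the affine set {x : A x = b};
    # assumes A has full row rank.
    AAt_inv = np.linalg.inv(A @ A.T)

    def project(v):
        return v - A.T @ (AAt_inv @ (A @ v - b))

    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        x = project(z - u)                   # feasibility step: x satisfies A x = b exactly
        z = soft_threshold(x + u, 1.0 / rho)  # sparsity-promoting step
        u = u + x - z                         # scaled dual (multiplier) update
    # At convergence x and z coincide; z is the exactly sparse iterate.
    return z

# Usage example on a random underdetermined system with a sparse ground truth.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 60))
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.5, -2.0, 0.7]
b = A @ x_true
x_hat = l1_equality_admm(A, b)
print(np.round(x_hat[[3, 17, 42]], 3), np.linalg.norm(A @ x_hat - b))
```

Inequality constraints can be handled in the same spirit by projecting onto the corresponding polyhedral set instead of the affine set, though an efficient closed-form projection is then generally unavailable.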