New RNN Algorithms for Different Time-Variant Matrix Inequalities Solving Under Discrete-Time Framework

IEEE Trans Neural Netw Learn Syst. 2024 Apr 16:PP. doi: 10.1109/TNNLS.2024.3382199. Online ahead of print.

Abstract

Discrete time-variant matrix inequalities are generally regarded as challenging problems in the science and engineering fields. Although these are discrete time-variant problems, the existing solving schemes generally require theoretical support from the continuous-time framework, and no independent solving scheme exists under the discrete-time framework. This theoretical deficiency greatly limits both the theoretical research on and the practical application of discrete time-variant matrix inequalities. In this article, new discrete-time recurrent neural network (DT-RNN) algorithms are proposed, analyzed, and investigated for solving different time-variant matrix inequalities under the discrete-time framework, including the discrete time-variant matrix-vector inequality (MVI), the discrete time-variant generalized matrix inequality (GMI), the discrete time-variant generalized-Sylvester matrix inequality (GSMI), and the discrete time-variant complicated-Sylvester matrix inequality (CSMI); all solving processes are based on the direct-discretization idea. Specifically, first, the four discrete time-variant matrix inequalities are presented as the target problems of this research. Second, to solve these problems, we propose the corresponding DT-RNN algorithms (termed the DT-RNN-MVI, DT-RNN-GMI, DT-RNN-GSMI, and DT-RNN-CSMI algorithms), which differ from the traditional DT-RNN design thought in that second-order Taylor expansion is applied to derive them. This derivation avoids any intervention of the continuous-time framework. Then, theoretical analyses are presented, which establish the convergence and precision of the DT-RNN algorithms. Extensive numerical experiments are further carried out, which confirm the excellent properties of the DT-RNN algorithms.
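To make the general idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual algorithm) of a discrete-time recurrent scheme for a time-variant matrix-vector inequality A(t)x(t) ≤ b(t). The inequality is converted to the equality A(t)x + u∘u − b(t) = 0 with a nonnegative slack term u∘u, a zeroing-type error dynamics ė = −γe supplies the continuous derivative, and the update uses a two-step formula obtained from a second-order Taylor expansion with the second derivative replaced by a backward difference. All function names, parameter values, and the example coefficient matrices here are illustrative assumptions.

```python
import numpy as np

def solve_tv_mvi(A_fn, b_fn, tau=1e-3, T=2.0, gamma=10.0):
    """Track a solution x(t) of the time-variant inequality A(t) x <= b(t).

    Illustrative sketch only: the inequality is recast as the equality
    A(t) x + u*u - b(t) = 0 via a slack vector u, and the state
    y = [x; u] is stepped with a Taylor-expansion-based two-step formula.
    """
    t = 0.0
    m, n = A_fn(0.0).shape
    y = np.concatenate([np.zeros(n), np.ones(m)])  # [x; u], positive slack start
    prev_dy = None
    for _ in range(int(T / tau)):
        A, b = A_fn(t), b_fn(t)
        # forward-difference approximations of A'(t) and b'(t)
        dA = (A_fn(t + tau) - A) / tau
        db = (b_fn(t + tau) - b) / tau
        x, u = y[:n], y[n:]
        e = A @ x + u * u - b                  # equality-form error
        J = np.hstack([A, 2.0 * np.diag(u)])   # Jacobian w.r.t. [x; u]
        dy = np.linalg.pinv(J) @ (db - dA @ x - gamma * e)
        if prev_dy is None:
            y = y + tau * dy                   # Euler bootstrap step
        else:
            # second-order Taylor step with y'' ~ (y'_k - y'_{k-1}) / tau:
            # y_{k+1} = y_k + (3*tau/2) y'_k - (tau/2) y'_{k-1}
            y = y + 1.5 * tau * dy - 0.5 * tau * prev_dy
        prev_dy = dy
        t += tau
    return y[:n], t

# Usage on a small, hypothetical 2x2 time-variant inequality.
A_fn = lambda t: np.array([[2.0 + np.sin(t), 0.3],
                           [0.1, 2.0 + np.cos(t)]])
b_fn = lambda t: np.array([np.cos(t) + 2.0, np.sin(t) + 2.0])
x, t_end = solve_tv_mvi(A_fn, b_fn)
print(np.max(A_fn(t_end) @ x - b_fn(t_end)) <= 1e-3)  # inequality holds at t_end
```

Because the slack term u∘u is elementwise nonnegative, driving the equality error e to zero forces A(t)x − b(t) = e − u∘u ≤ e, so the inequality is satisfied up to the (exponentially decaying) residual. The two-step update keeps the local truncation error at second order, which is the motivation the abstract attributes to the Taylor-expansion-based design.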