On a Finitely Activated Terminal RNN Approach to Time-Variant Problem Solving

IEEE Trans Neural Netw Learn Syst. 2022 Dec;33(12):7289-7302. doi: 10.1109/TNNLS.2021.3084740. Epub 2022 Nov 30.

Abstract

This article is concerned with terminal recurrent neural network (RNN) models for time-variant computing, featuring finite-valued activation functions (AFs) and finite-time convergence of error variables. Terminal RNNs refer to models that admit terminal attractors, so that the dynamics of each neuron converges in finite time. A potential imperfection of asymptotically convergent RNNs in solving time-variant problems is pointed out through theoretical examination, for which finite-time-convergent models are most desirable. Existing AFs are summarized, and it is found that AFs taking only finite values are lacking. A finitely valued terminal RNN is then considered, which involves only basic algebraic operations and root extraction. The proposed terminal RNN model is applied to the time-variant problems undertaken, including time-variant quadratic programming and motion planning of redundant manipulators. Numerical results demonstrate the effectiveness of the proposed neural network, whose convergence rate is comparable with that of the existing power-rate RNN.
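The abstract does not give the model's equations, so the following Python sketch only illustrates the general zeroing-RNN methodology the paper builds on: an error function e(t) is driven to zero by the design dynamics de/dt = -gamma * phi(e), where phi is the classical power-rate AF phi(e) = sign(e)|e|^r with 0 < r < 1 (the existing finite-time choice the paper compares against), not the paper's finitely valued AF. The time-variant linear system A(t)x(t) = b(t), the gain gamma, and all function names here are illustrative assumptions, not the article's actual problem instances.

    import numpy as np

    # Assumed power-rate AF: sign(e) * |e|^r with 0 < r < 1, the standard
    # finite-time-convergent choice; the paper's finitely valued AF is
    # not specified in the abstract.
    def power_rate(e, r=0.5):
        return np.sign(e) * np.abs(e) ** r

    # Hypothetical time-variant linear system A(t) x(t) = b(t), standing in
    # for the time-variant problems mentioned in the abstract.
    def A(t):
        return np.array([[2.0 + np.sin(t), 0.5],
                         [0.5, 2.0 + np.cos(t)]])

    def b(t):
        return np.array([np.sin(t), np.cos(t)])

    def dA(t, h=1e-6):  # central-difference time derivative of A
        return (A(t + h) - A(t - h)) / (2 * h)

    def db(t, h=1e-6):  # central-difference time derivative of b
        return (b(t + h) - b(t - h)) / (2 * h)

    # Zeroing-RNN design: with e(t) = A(t) x(t) - b(t), imposing the
    # terminal error dynamics de/dt = -gamma * phi(e) gives
    #   A(t) dx/dt = db/dt - (dA/dt) x - gamma * phi(e).
    gamma, dt, T = 10.0, 1e-3, 5.0
    x = np.zeros(2)  # deliberately wrong initial state
    for k in range(int(T / dt)):
        t = k * dt
        e = A(t) @ x - b(t)
        xdot = np.linalg.solve(A(t), db(t) - dA(t) @ x - gamma * power_rate(e))
        x += dt * xdot  # forward-Euler integration of the neural dynamics

    print("residual |A x - b| at t =", T, ":", np.linalg.norm(A(T) @ x - b(T)))

Under the power-rate AF, the scalar error dynamics de/dt = -gamma * sign(e)|e|^r reach zero in finite time rather than only asymptotically, which is the terminal-attractor property the abstract emphasizes; the article's contribution, per the abstract, is an AF with this property that additionally takes only finite values.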

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Neural Networks, Computer*
  • Neurons
  • Problem Solving*