Dynamical Neural Networks that Ensure Exponential Identification Error Convergence

Neural Netw. 1997 Mar;10(2):299-314. doi: 10.1016/s0893-6080(96)00060-3.

Abstract

Classical adaptive and robust adaptive schemes are unable to ensure convergence of the identification error to zero in the presence of modeling errors. Consequently, applying such schemes to "black-box" identification of nonlinear systems guarantees, at best, a bounded identification error. In this paper, new learning (adaptive) laws are proposed which, when applied to recurrent high-order neural networks (RHONN), ensure that the identification error converges to zero exponentially fast; moreover, if the identification error is initially zero, it remains zero throughout the identification process. The parameter convergence properties of the proposed scheme, that is, its capability of converging to the optimal neural network model, are also examined and shown to be similar to those of classical adaptive and parameter estimation schemes. Finally, it is noted that the proposed learning laws are not locally implementable, as they make use of global knowledge of signals and parameters. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.
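
The abstract does not reproduce the RHONN model or the proposed learning laws. As a rough illustration of the identification setting only, the sketch below simulates a scalar series-parallel RHONN identifier of the standard form xhat_dot = -a*xhat + w^T z(x, u), where z collects high-order sigmoidal terms. The adaptive law used here is the classical gradient law (the baseline whose limitations the paper addresses), not the paper's new exponentially convergent laws; the choice of high-order terms, gains, and the "true" plant weights are all hypothetical.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def z_terms(x, u):
    # Hypothetical choice of high-order terms: s(x), s(u), and the
    # second-order product s(x)*s(u).
    sx, su = sigmoid(x), sigmoid(u)
    return np.array([sx, su, sx * su])

a, gamma, dt = 1.0, 50.0, 1e-3
w_true = np.array([2.0, -1.0, 0.5])   # "unknown" plant weights (assumed)
w = np.zeros(3)                        # identifier weight estimates
x, xhat = 0.1, 0.0

for k in range(20000):
    # Input chosen to be reasonably exciting.
    u = np.sin(2 * np.pi * 0.5 * k * dt) + 0.5 * np.sin(2 * np.pi * 1.3 * k * dt)
    z = z_terms(x, u)
    e = xhat - x                       # identification error
    # Plant: assumed here to be exactly an RHONN, i.e., no modeling error.
    x += dt * (-a * x + w_true @ z)
    # Series-parallel identifier with the CLASSICAL gradient adaptive law
    # w_dot = -gamma * e * z (not the paper's proposed laws).
    xhat += dt * (-a * xhat + w @ z)
    w += dt * (-gamma * e * z)

print(f"final error {e:.2e}, weight error {np.linalg.norm(w - w_true):.2e}")
```

Under this classical law, the Lyapunov function V = e^2/2 + (w - w_true)^T (w - w_true)/(2*gamma) yields V_dot = -a*e^2, which gives only asymptotic (not exponential) error convergence in the ideal case and merely bounded error under modeling errors; the paper's contribution is learning laws that strengthen this to exponential convergence of e to zero.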