Dynamic Event-Triggering Neural Learning Control for Partially Unknown Nonlinear Systems

IEEE Trans Cybern. 2022 Apr;52(4):2200-2213. doi: 10.1109/TCYB.2020.3004493. Epub 2022 Apr 5.

Abstract

This article presents an event-sampled integral reinforcement learning algorithm for partially unknown nonlinear systems using a dynamic event-triggering strategy, which represents a novel attempt to introduce dynamic triggering into the adaptive learning process. The core of the algorithm is the policy iteration technique, implemented by two neural networks: a critic network is periodically tuned using the integral reinforcement signal, and an actor network adopts event-based communication to update the control policy only at triggering instants. To overcome the deficiencies of static triggering, a dynamic triggering rule is proposed to determine the occurrence of events, in which an internal dynamic variable characterized by a first-order filter is defined. Theoretical results indicate that the impulsive system driven by events is asymptotically stable, the network weights are convergent, and Zeno behavior is avoided. Finally, three examples demonstrate that the proposed dynamic triggering algorithm further reduces sampling and transmissions while guaranteeing learning performance.
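To make the triggering mechanism concrete, the following is a minimal sketch of a dynamic event-triggering rule of the kind the abstract describes: an internal dynamic variable governed by a first-order filter, with a transmission fired only when a filtered inequality is violated. The specific filter form, the gains `sigma`, `lam`, and `theta`, and the squared-norm thresholds are illustrative assumptions in the style of standard dynamic triggering conditions, not the paper's exact rule.

```python
import numpy as np

def simulate_dynamic_trigger(x_traj, dt=0.01, sigma=0.5, lam=1.0, theta=1.0):
    """Simulate a dynamic event-triggering rule along a state trajectory.

    eta is the internal dynamic variable, driven by the first-order filter
        eta_dot = -lam * eta + sigma * ||x||^2 - ||e||^2,
    where e = x_hat - x is the sampling-induced error between the last
    transmitted state x_hat and the current state x.  An event fires when
        eta + theta * (sigma * ||x||^2 - ||e||^2) <= 0,
    i.e. only after the filtered "surplus" accumulated in eta is exhausted,
    which is what lets the dynamic rule fire less often than a static one.
    Returns the indices of the triggering instants.
    """
    eta = 0.0                       # internal dynamic variable, kept nonnegative
    x_hat = x_traj[0].copy()        # last transmitted (event-sampled) state
    events = [0]                    # an event occurs at the initial instant
    for k in range(1, len(x_traj)):
        x = x_traj[k]
        e = x_hat - x               # error induced by holding the old sample
        gap = sigma * (x @ x) - (e @ e)
        if eta + theta * gap <= 0.0:
            # Dynamic triggering condition violated: transmit the state,
            # which resets the sampling error to zero.
            x_hat = x.copy()
            events.append(k)
            gap = sigma * (x @ x)
        # Forward-Euler update of the first-order filter defining eta.
        eta = max(eta + dt * (-lam * eta + gap), 0.0)
    return events
```

Under this sketch, a static rule would fire as soon as `gap <= 0`; the dynamic variable `eta` delays each event, reducing the number of transmissions.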

MeSH terms

  • Algorithms
  • Communication
  • Feedback
  • Neural Networks, Computer*
  • Nonlinear Dynamics*