Q-Learning for Feedback Nash Strategy of Finite-Horizon Nonzero-Sum Difference Games

IEEE Trans Cybern. 2022 Sep;52(9):9170-9178. doi: 10.1109/TCYB.2021.3052832. Epub 2022 Aug 18.

Abstract

In this article, we study the feedback Nash strategy of the model-free nonzero-sum difference game. The main contribution is a Q-learning algorithm for the linear quadratic game that requires no prior knowledge of the system model. Notably, the studied game has a finite horizon, which distinguishes it from the learning algorithms in the literature, most of which target the infinite-horizon Nash strategy. The key is to characterize the Q-factors in terms of arbitrary control inputs and state information. A numerical example is given to verify the effectiveness of the proposed algorithm.
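To illustrate the kind of scheme the abstract describes, the sketch below implements a backward-in-time, model-free Q-learning procedure for a hypothetical two-player finite-horizon linear quadratic difference game. All system matrices (A, B1, B2), cost weights, dimensions, and the least-squares Q-factor fitting are illustrative assumptions, not the paper's exact formulation: the dynamics are used only as a black-box simulator, each stage's quadratic Q-factor is estimated by regression on (state, input) samples, and the feedback Nash gains are read off from the coupled stationarity conditions of the two Q-factors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-player LQ difference game (illustrative values only).
# The learner treats step() as a black box and never reads A, B1, B2.
n, m, p, N = 2, 1, 1, 5
A  = np.array([[0.9, 0.2], [0.1, 0.8]])
B1 = np.array([[1.0], [0.0]])   # player 1 input channel
B2 = np.array([[0.0], [1.0]])   # player 2 input channel
Q1, R1 = np.eye(n), np.eye(m)   # player 1 stage-cost weights
Q2, R2 = 2.0 * np.eye(n), np.eye(p)  # player 2 stage-cost weights

def step(x, u, w):
    """Black-box one-step simulator: x_{k+1} = A x + B1 u + B2 w."""
    return A @ x + B1 @ u + B2 @ w

def vech_features(z):
    """Features phi(z) so that phi(z) @ vech(H) == z' H z for symmetric H."""
    d = len(z)
    return np.array([z[i] * z[j] if i == j else 2 * z[i] * z[j]
                     for i in range(d) for j in range(i, d)])

def unvech(theta, d):
    """Rebuild the symmetric matrix H from its vectorized upper triangle."""
    H = np.zeros((d, d))
    idx = 0
    for i in range(d):
        for j in range(i, d):
            H[i, j] = H[j, i] = theta[idx]
            idx += 1
    return H

def gains_from_H(H1, H2):
    """Nash gains from the coupled stationarity conditions of both Q-factors."""
    xs, us, ws = slice(0, n), slice(n, n + m), slice(n + m, n + m + p)
    M = np.block([[H1[us, us], H1[us, ws]],
                  [H2[ws, us], H2[ws, ws]]])
    rhs = -np.vstack([H1[us, xs], H2[ws, xs]])
    L = np.linalg.solve(M, rhs)
    return L[:m, :], L[m:, :]   # u_k = L1 x_k, w_k = L2 x_k

def q_learning_nash(samples_per_stage=60):
    """Backward pass k = N-1, ..., 0: fit both Q-factors, extract gains."""
    d = n + m + p
    P1 = np.zeros((n, n))       # terminal values V_N^i(x) = 0
    P2 = np.zeros((n, n))
    gains = [None] * N
    for k in reversed(range(N)):
        Phi, y1, y2 = [], [], []
        for _ in range(samples_per_stage):
            x = rng.standard_normal(n)   # arbitrary (exploratory) samples
            u = rng.standard_normal(m)
            w = rng.standard_normal(p)
            xn = step(x, u, w)
            Phi.append(vech_features(np.concatenate([x, u, w])))
            # Q-factor targets: stage cost + next-stage value.
            y1.append(x @ Q1 @ x + u @ R1 @ u + xn @ P1 @ xn)
            y2.append(x @ Q2 @ x + w @ R2 @ w + xn @ P2 @ xn)
        Phi = np.array(Phi)
        H1 = unvech(np.linalg.lstsq(Phi, np.array(y1), rcond=None)[0], d)
        H2 = unvech(np.linalg.lstsq(Phi, np.array(y2), rcond=None)[0], d)
        L1, L2 = gains_from_H(H1, H2)
        gains[k] = (L1, L2)
        # Propagate the value matrices under the Nash feedback.
        Z = np.vstack([np.eye(n), L1, L2])
        P1, P2 = Z.T @ H1 @ Z, Z.T @ H2 @ Z
    return gains
```

With noise-free quadratic targets and enough exploratory samples per stage, the regression recovers each Q-factor exactly, so the learned gains coincide with those of a model-based coupled backward recursion; in practice, sample noise and excitation quality would determine the accuracy.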

MeSH terms

  • Algorithms*
  • Feedback