Coevolutionary networks of reinforcement-learning agents

Phys Rev E Stat Nonlin Soft Matter Phys. 2013 Jul;88(1):012815. doi: 10.1103/PhysRevE.88.012815. Epub 2013 Jul 24.

Abstract

This paper presents a model of network formation in repeated games in which the players adapt their strategies and network ties simultaneously via a simple reinforcement-learning scheme. We show that the coevolutionary dynamics of such systems can be described by coupled replicator equations. We provide a comprehensive analysis of three-player, two-action games, the minimum system size with nontrivial structural dynamics. In particular, we characterize the Nash equilibria (NE) in such games and examine the local stability of the rest points corresponding to those equilibria. We also study general n-player networks via both simulations and analytical methods and find that, in the absence of exploration, the stable equilibria feature star motifs as the main building blocks of the network. Furthermore, in all stable equilibria the agents play pure strategies, even when the game admits mixed NE. Finally, we study the impact of exploration on learning outcomes and observe that there is a critical exploration rate above which the symmetric, uniformly connected network topology becomes stable.
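
As a concrete illustration of the setup described above, the following is a minimal simulation sketch, not the paper's own code: it assumes a cumulative Roth-Erev-style reinforcement update applied jointly to network ties (partner choice) and actions, with a uniform exploration rate. The agent count N, the step count, the payoff matrix (an arbitrary coordination game), and the parameter EPS are all illustrative assumptions. Cumulative reinforcement of this kind is known to yield replicator-type dynamics in the continuous-time limit (Börgers and Sarin), consistent with the coupled-replicator description in the abstract.

    import numpy as np

    rng = np.random.default_rng(0)

    N = 6            # number of agents (illustrative)
    EPS = 0.01       # exploration rate; a hypothetical parameter choice
    STEPS = 50_000

    # Symmetric two-action coordination game; this payoff matrix is an
    # illustrative stand-in, not one of the games analyzed in the paper.
    PAYOFF = np.array([[1.0, 0.0],
                       [0.0, 2.0]])

    # Propensities over partners (network ties) and over actions (strategies).
    Q_link = np.ones((N, N))
    np.fill_diagonal(Q_link, 0.0)   # no self-ties
    Q_act = np.ones((N, 2))

    def link_probs(i, eps):
        """Tie-choice distribution for agent i: normalized propensities,
        mixed with uniform exploration over the N-1 possible partners."""
        p = Q_link[i] / Q_link[i].sum()
        u = np.full(N, 1.0 / (N - 1))
        u[i] = 0.0
        return (1.0 - eps) * p + eps * u

    def act_probs(i, eps):
        """Action distribution for agent i with uniform exploration."""
        p = Q_act[i] / Q_act[i].sum()
        return (1.0 - eps) * p + eps / 2.0

    for _ in range(STEPS):
        i = rng.integers(N)
        j = rng.choice(N, p=link_probs(i, EPS))   # choose a partner
        a = rng.choice(2, p=act_probs(i, EPS))    # choose actions
        b = rng.choice(2, p=act_probs(j, EPS))
        # Cumulative reinforcement: the realized payoff reinforces both
        # the chosen tie and the chosen action, so strategies and network
        # structure coevolve under the same learning rule.
        Q_link[i, j] += PAYOFF[a, b]
        Q_act[i, a] += PAYOFF[a, b]
        Q_act[j, b] += PAYOFF[b, a]

    print(np.round(Q_link / Q_link.sum(axis=1, keepdims=True), 2))  # tie weights
    print(np.round(Q_act / Q_act.sum(axis=1, keepdims=True), 2))    # strategies

In runs of this sketch, setting EPS = 0 tends to lock agents into pure strategies and strongly concentrated, hub-like tie distributions, whereas a sufficiently large EPS keeps the choice distributions close to uniform, qualitatively in line with the abstract's findings on star motifs and the critical exploration rate.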

Publication types

  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Game Theory*
  • Humans
  • Models, Theoretical
  • Reinforcement, Psychology