Heuristically-accelerated multiagent reinforcement learning

IEEE Trans Cybern. 2014 Feb;44(2):252-65. doi: 10.1109/TCYB.2013.2253094.

Abstract

This paper presents a novel class of algorithms, called Heuristically-Accelerated Multiagent Reinforcement Learning (HAMRL), which allows the use of heuristics to speed up well-known multiagent reinforcement learning (RL) algorithms such as Minimax-Q. HAMRL algorithms are characterized by a heuristic function, which suggests the selection of particular actions over others. This function represents an initial action selection policy, which can be handcrafted, extracted from previous experience in distinct domains, or learnt from observation. To validate the proposal, a thorough theoretical analysis proving the convergence of four algorithms from the HAMRL class (HAMMQ, HAMQ(λ), HAMQS, and HAMS) is presented. In addition, a comprehensive systematic evaluation was conducted in two distinct adversarial domains. The results show that even the most straightforward heuristics can produce virtually optimal action selection policies in far fewer episodes, significantly improving the performance of HAMRL over vanilla RL algorithms.
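To make the core idea concrete, the sketch below shows one plausible reading of heuristically accelerated action selection: the agent picks actions greedily over the learned value plus a weighted heuristic term, while the value update itself is unchanged. This is a minimal illustration assuming a tabular Q-learning backbone; the weighting parameter `xi`, the exploration rate `epsilon`, and the class and method names are illustrative assumptions, not the paper's exact Minimax-Q-based formulation.

```python
import random
from collections import defaultdict

# Illustrative sketch (not the paper's exact algorithm): action selection is
# biased by a heuristic function H(s, a) added to the learned Q(s, a), i.e.
# argmax_a [ Q(s, a) + xi * H(s, a) ], while learning uses a standard update.

class HeuristicQAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1, xi=1.0):
        self.q = defaultdict(float)   # learned value estimates Q(s, a)
        self.h = defaultdict(float)   # heuristic H(s, a): handcrafted,
                                      # from prior experience, or observed
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma
        self.epsilon, self.xi = epsilon, xi

    def choose_action(self, state):
        # Epsilon-greedy over the heuristically biased values.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions,
                   key=lambda a: self.q[(state, a)] + self.xi * self.h[(state, a)])

    def update(self, state, action, reward, next_state):
        # Ordinary Q-learning update: the heuristic shapes only action
        # selection, which is how the abstract's convergence results for
        # the HAMRL class can coexist with an arbitrary heuristic.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```

A handcrafted prior policy would then be injected by setting, say, `agent.h[(state, suggested_action)] = 1.0` for the states where domain knowledge favors a particular move, leaving all other heuristic entries at zero.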

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms*
  • Artificial Intelligence*
  • Biomimetics / methods*
  • Computer Simulation
  • Models, Theoretical*
  • Reinforcement, Psychology*