Guiding exploration by pre-existing knowledge without modifying reward

Neural Netw. 2007 Aug;20(6):736-47. doi: 10.1016/j.neunet.2007.02.001. Epub 2007 Feb 12.

Abstract

Reinforcement learning is based on exploring the environment and receiving rewards that indicate which actions taken by the agent are good and which are bad. In many applications, receiving even the first reward may require long exploration, during which the agent has no information about its progress. This paper presents an approach that makes it possible to use pre-existing knowledge about the task to guide exploration through the state space. Concepts of short- and long-term memory combine this guidance with reinforcement learning methods for value-function estimation, making learning faster while still allowing the agent to converge towards a good policy.
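To make the idea concrete, here is a minimal sketch in the spirit of the abstract, not the paper's exact short-/long-term memory mechanism: pre-existing knowledge is encoded as a heuristic that biases action selection during exploration, while the reward signal and a standard tabular Q-learning update (the long-term value estimate) stay untouched, and the heuristic's weight decays over time. The gridworld, the distance heuristic, and the decay schedule are all illustrative assumptions.

```python
# Sketch: guidance-biased exploration with an unmodified reward.
# Assumptions (not from the paper): a 5x5 gridworld, a Manhattan-distance
# heuristic as "pre-existing knowledge", tabular Q-learning, geometric decay.

import random
from collections import defaultdict

SIZE, GOAL = 5, (4, 4)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(state, action):
    """Move within the grid; reward is given only at the goal (unmodified)."""
    x, y = state
    dx, dy = action
    nxt = (min(max(x + dx, 0), SIZE - 1), min(max(y + dy, 0), SIZE - 1))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def heuristic(state, action):
    """Pre-existing knowledge: prefer actions that move closer to the goal."""
    nxt, _, _ = step(state, action)
    return -(abs(GOAL[0] - nxt[0]) + abs(GOAL[1] - nxt[1]))

def select_action(q, state, weight):
    """Bias exploration by the heuristic; the bias never enters the reward."""
    scores = [q[(state, a)] + weight * heuristic(state, a) for a in ACTIONS]
    best = max(scores)
    return random.choice([a for a, s in zip(ACTIONS, scores) if s == best])

def train(episodes=300, alpha=0.1, gamma=0.95, weight=1.0, decay=0.99):
    q = defaultdict(float)  # learned action values (long-term memory)
    for _ in range(episodes):
        state, done = (0, 0), False
        while not done:
            action = select_action(q, state, weight)
            nxt, reward, done = step(state, action)
            # Standard Q-learning update on the unmodified reward.
            target = reward + gamma * max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = nxt
        weight *= decay  # guidance fades; the learned policy remains
    return q

if __name__ == "__main__":
    q = train()
    print("Value of start state:", max(q[((0, 0), a)] for a in ACTIONS))
```

Because the guidance enters only action selection and never the reward, the value function being estimated is the same one unguided Q-learning would target; the heuristic merely shortens the time until the first reward is found.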

MeSH terms

  • Computer Simulation
  • Exploratory Behavior / physiology*
  • Humans
  • Knowledge*
  • Learning*
  • Models, Psychological
  • Predictive Value of Tests
  • Reward*