Learning and forgetting using reinforced Bayesian change detection

PLoS Comput Biol. 2019 Apr 17;15(4):e1006713. doi: 10.1371/journal.pcbi.1006713. eCollection 2019 Apr.

Abstract

Agents living in volatile environments must be able to detect changes in contingencies while refraining from adapting to unexpected events that are caused by noise. In Reinforcement Learning (RL) frameworks, this requires learning rates that adapt to the past reliability of the model. The observation that behavioural flexibility in animals tends to decrease following prolonged training in stable environments provides experimental evidence for such adaptive learning rates. However, in classical RL models, the learning rate is either fixed or scheduled and thus cannot adapt dynamically to environmental changes. Here, we propose a new Bayesian learning model, using variational inference, that achieves adaptive change detection through Stabilized Forgetting, updating its current belief based on a mixture of fixed, initial priors and previous posterior beliefs. The weight given to these two sources is optimized alongside the other parameters, allowing the model to adapt dynamically to changes in environmental volatility and to unexpected observations. This approach is used to implement the "critic" of an actor-critic RL model, while the actor samples the resulting value distributions to choose which action to undertake. We show that our model can emulate different adaptation strategies to contingency changes, depending on its prior assumptions about environmental stability, and that model parameters can be fit to real data with high accuracy. The model also exhibits trade-offs between flexibility and computational costs that mirror those observed in real data. Overall, the proposed method provides a general framework to study learning flexibility and decision making in RL contexts.
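The stabilized-forgetting update can be illustrated with a minimal sketch, not the paper's exact variational model. The snippet below assumes a Beta-Bernoulli reward estimate and the conjugate exponential-family variant of stabilized forgetting, in which the posterior's parameters are convexly combined with those of the fixed initial prior before each Bayes update; the mixing weight `lam`, the prior parameters `a0, b0`, and the reward probability `p_true` are illustrative placeholders, and in the full model the stability weight is optimized alongside the other parameters rather than fixed.

```python
import numpy as np

# Minimal sketch of stabilized forgetting for a Beta-Bernoulli critic.
# Belief over the reward probability is Beta(a, b); before each update,
# its parameters are shrunk toward the fixed initial prior Beta(a0, b0)
# with weight (1 - lam). lam near 1 assumes a stable world (slow
# forgetting); smaller lam lets the belief reset quickly after changes.

a0, b0 = 1.0, 1.0          # fixed initial prior
a, b = a0, b0              # current posterior parameters
lam = 0.95                 # stability weight (hypothetical fixed value)

rng = np.random.default_rng(0)
p_true = 0.8
for t in range(200):
    if t == 100:           # unsignalled contingency change
        p_true = 0.2
    r = rng.random() < p_true          # Bernoulli reward
    # Stabilized-forgetting step: convex mixture of the previous
    # posterior and the fixed prior, then a conjugate Bayes update.
    a = lam * a + (1 - lam) * a0 + r
    b = lam * b + (1 - lam) * b0 + (1 - r)

print(f"posterior mean after change: {a / (a + b):.2f}")  # tracks 0.2
```

With `lam` near 1 the belief accumulates evidence almost as in a stationary model, while lowering it lets the posterior mean re-track the new contingency within a few dozen trials after the change at t = 100, illustrating the flexibility/stability trade-off described in the abstract.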

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adaptation, Psychological
  • Algorithms
  • Animals
  • Bayes Theorem*
  • Behavior, Animal
  • Computational Biology
  • Computer Simulation
  • Decision Making
  • Decision Support Techniques
  • Humans
  • Learning*
  • Models, Psychological
  • Reinforcement, Psychology
  • Reward

Grants and funding

This work was performed at the Institute of Neuroscience (IoNS) of the Université catholique de Louvain (Brussels, Belgium); it was supported by grants from the ARC (Actions de Recherche Concertées, Communauté Française de Belgique), from the Fondation Médicale Reine Elisabeth (FMRE), from the Fonds de la Recherche Scientifique (F.R.S.-FNRS) and from IdEx Bordeaux. A.Z. was a Senior Research Associate supported by INNOVIRIS and is currently a 1st-grade researcher at the French National Center for Scientific Research (CNRS). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.