Branching Time Active Inference: The theory and its generality

Neural Netw. 2022 Jul;151:295-316. doi: 10.1016/j.neunet.2022.03.036. Epub 2022 Apr 6.

Abstract

Over the last 10 to 15 years, active inference has helped to explain a variety of brain mechanisms, from habit formation to dopaminergic discharge, and has even been used to model curiosity. However, current implementations suffer from exponential (space and time) complexity when computing the prior over all possible policies up to the time horizon. Fountas et al. (2020) used Monte Carlo tree search to address this problem, leading to impressive results on two different tasks. In this paper, we present an alternative framework that aims to unify tree search and active inference by casting planning as a structure learning problem. Two tree search algorithms are then presented. The first propagates the expected free energy forward in time (i.e., towards the leaves), while the second propagates it backward (i.e., towards the root). We then demonstrate that forward and backward propagation are related to active inference and sophisticated inference, respectively, thereby clarifying the differences between these two planning strategies.
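
For intuition only, the following is a minimal, hypothetical Python sketch (not the paper's implementation) of the two propagation directions over a generic tree whose nodes carry a local expected free energy g. The forward pass accumulates expected free energy from the root towards the leaves; the backward pass aggregates it from the leaves towards the root, here with a log-sum-exp (softmin) aggregation, which is an illustrative assumption rather than the authors' update rule.

from dataclasses import dataclass, field
from typing import List
import math


@dataclass
class Node:
    g: float                        # local expected free energy of this node
    score: float = 0.0              # score assigned by a propagation pass
    children: List["Node"] = field(default_factory=list)


def forward_efe(node: Node, accumulated: float = 0.0) -> None:
    """Forward propagation (root -> leaves): each node's score is the
    expected free energy accumulated along the path from the root."""
    node.score = accumulated + node.g
    for child in node.children:
        forward_efe(child, node.score)


def backward_efe(node: Node) -> float:
    """Backward propagation (leaves -> root): each node's score aggregates
    its subtrees' scores via a softmin (log-sum-exp), so the root reflects
    the most promising futures, in the spirit of sophisticated inference."""
    if not node.children:
        node.score = node.g
    else:
        child_scores = [backward_efe(c) for c in node.children]
        node.score = node.g - math.log(sum(math.exp(-s) for s in child_scores))
    return node.score

The two passes make the abstract's contrast concrete: forward propagation scores each branch by the cost of reaching it, whereas backward propagation scores each branch by the futures it leads to.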

Keywords: Active inference; Free energy principle; Planning; Tree search; Variational message passing.

MeSH terms

  • Algorithms*
  • Brain
  • Entropy
  • Learning*
  • Monte Carlo Method