World model learning and inference

Neural Netw. 2021 Dec:144:573-590. doi: 10.1016/j.neunet.2021.09.011. Epub 2021 Sep 21.

Abstract

Understanding information processing in the brain, and creating general-purpose artificial intelligence, are long-standing aspirations of scientists and engineers worldwide. The distinctive features of human intelligence are high-level cognition and control across varied interactions with the world, including the self, which are not defined in advance and vary over time. The challenge of building human-like intelligent machines, as well as progress in brain science and behavioural analyses, robotics, and their associated theoretical formalisations, speaks to the importance of world-model learning and inference. In this article, after briefly surveying the history and challenges of internal model learning and probabilistic learning, we introduce the free energy principle, which provides a useful framework within which to consider neuronal computation and probabilistic world models. Next, we showcase examples of human behaviour and cognition explained under that principle. We then describe symbol emergence in the context of probabilistic modelling, as a topic at the frontiers of cognitive robotics. Lastly, we review recent progress in creating human-like intelligence by using novel probabilistic programming languages. The striking consensus that emerges from these studies is that probabilistic descriptions of learning and inference are powerful and effective ways to create human-like artificial intelligent machines and to understand intelligence in the context of how humans interact with their world.
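As background for readers unfamiliar with the free energy principle invoked above, its standard variational formulation (not stated in this abstract; the notation below follows the common convention, with hidden states $s$, observations $o$, generative model $p$, and approximate posterior $q$) can be sketched as:

```latex
% Variational free energy F bounds surprise (negative log evidence).
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  = D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s \mid o)\right] - \ln p(o)
\;\geq\; -\ln p(o)
```

Minimising $F$ with respect to $q(s)$ drives the approximate posterior toward the true posterior $p(s \mid o)$ (perception as inference), while minimising it with respect to actions that shape $o$ underwrites active inference; this dual reading is what makes the principle a unifying account of neuronal computation and probabilistic world models.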

Keywords: Bayesian inference; Cognitive development; Free energy principle; Generative model; Predictive coding; Probabilistic inference.

Publication types

  • Review

MeSH terms

  • Artificial Intelligence*
  • Brain
  • Cognition
  • Humans
  • Intelligence
  • Models, Statistical*