The evolution of continuous learning of the structure of the environment

J R Soc Interface. 2014 Jan 8;11(92):20131091. doi: 10.1098/rsif.2013.1091. Print 2014 Mar 6.

Abstract

Continuous, 'always on', learning of structure from a stream of data is studied mainly in the fields of machine learning and language acquisition, but its evolutionary roots may go back to the first organisms that were internally motivated to learn and represent their environment. Here, we study under what conditions such continuous learning (CL) may be more adaptive than simple reinforcement learning and examine how it could have evolved from the same basic associative elements. We use agent-based computer simulations to compare three learning strategies: simple reinforcement learning; reinforcement learning with chaining (RL-chain); and CL, which applies the same associative mechanisms used by the other strategies but also seeks statistical regularities in the relations among all items in the environment, regardless of their initial association with food. We show that a sufficiently structured environment favours the evolution of both RL-chain and CL, and that CL outperforms the other strategies when food is relatively rare and the time available for learning is limited. This advantage of internally motivated CL stems from its ability to capture statistical patterns in the environment even before they are associated with food, at which point they immediately become useful for planning.
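
The contrast among the three strategies can be made concrete with a small toy simulation. The Python sketch below is a minimal illustration under assumed simplifications, not the authors' actual model: the item names (leaf/stem/fruit), the chain structure, the update rules and the reliability threshold are all hypothetical. It shows the mechanism the abstract describes: an item two steps removed from food is valued by CL as soon as the environmental structure has been observed, by RL-chain only after repeated back-propagation of value, and by simple reinforcement learning never.

import random
from collections import defaultdict

# Hypothetical world parameters (not taken from the paper's simulations).
CHAIN = ["leaf", "stem", "fruit"]   # a structured sequence that ends in food
NOISE = ["rock", "sand", "twig"]    # items unrelated to food
FOOD_PROB = 0.2                     # chance a stream segment is the food chain


def make_stream(length=300):
    """Generate an item stream in which food reliably follows the chain."""
    stream = []
    while len(stream) < length:
        if random.random() < FOOD_PROB:
            stream.extend(CHAIN + ["food"])
        else:
            stream.append(random.choice(NOISE))
    return stream


class SimpleRL:
    """Learns only direct item -> food associations."""
    def __init__(self):
        self.v = defaultdict(float)

    def observe(self, prev, curr):
        if curr == "food":
            self.v[prev] += 1.0       # reinforce the item that preceded food

    def value_of(self, item):
        return self.v[item]


class RLChain(SimpleRL):
    """As SimpleRL, but value also propagates one step back (chaining)."""
    def observe(self, prev, curr):
        super().observe(prev, curr)
        if self.v[curr] > 0:          # a predictor of an already-valued item
            self.v[prev] += 0.5       # gradually acquires value itself


class ContinuousLearner:
    """Tracks transition statistics among ALL items, rewarded or not."""
    THRESHOLD = 0.5                   # assumed: 'structure' = reliable transition

    def __init__(self):
        self.trans = defaultdict(lambda: defaultdict(int))

    def observe(self, prev, curr):
        self.trans[prev][curr] += 1   # learned regardless of any reward

    def _reliable_next(self, item):
        total = sum(self.trans[item].values())
        return [nxt for nxt, n in self.trans[item].items()
                if n / total > self.THRESHOLD] if total else []

    def value_of(self, item, depth=0, seen=()):
        # Once food appears, every reliable path leading to it is usable at once.
        if item == "food":
            return 1.0
        if depth > 4 or item in seen:
            return 0.0
        return max((self.value_of(n, depth + 1, seen + (item,))
                    for n in self._reliable_next(item)), default=0.0)


if __name__ == "__main__":
    random.seed(0)
    agents = {"RL": SimpleRL(), "RL-chain": RLChain(), "CL": ContinuousLearner()}
    stream = make_stream()
    for prev, curr in zip(stream, stream[1:]):
        for agent in agents.values():
            agent.observe(prev, curr)
    # 'leaf' is two steps from food: simple RL never values it, RL-chain
    # values it only after repeated exposure, CL values it as soon as the
    # leaf -> stem -> fruit -> food structure has been observed.
    for name, agent in agents.items():
        print(f"{name:9s} value of 'leaf': {agent.value_of('leaf'):.2f}")

In this toy, lowering FOOD_PROB or shortening the stream widens RL-chain's lag behind CL, which is consistent with the abstract's claim that CL is favoured when food is relatively rare and the time available for learning is limited.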

Keywords: decision-making; evolution of cognition; foraging theory; representation; statistical learning.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adaptation, Biological / physiology*
  • Biological Evolution*
  • Cognition / physiology*
  • Computer Simulation
  • Decision Making / physiology
  • Environment*
  • Learning / physiology*
  • Models, Biological*
  • Species Specificity