Value representations in the rodent orbitofrontal cortex drive learning, not choice

eLife. 2022 Aug 17;11:e64575. doi: 10.7554/eLife.64575.

Abstract

Humans and animals make predictions about the rewards they expect to receive in different situations. In formal models of behavior, these predictions are known as value representations, and they play two very different roles. Firstly, they drive choice: the expected values of available options are compared to one another, and the best option is selected. Secondly, they support learning: expected values are compared to rewards actually received, and future expectations are updated accordingly. Whether these different functions are mediated by different neural representations remains an open question. Here, we employ a recently developed multi-step task for rats that computationally separates learning from choosing. We investigate the role of value representations in the rodent orbitofrontal cortex, a key structure for value-based cognition. Electrophysiological recordings and optogenetic perturbations indicate that these representations do not directly drive choice. Instead, they signal expected reward information to a learning process elsewhere in the brain that updates choice mechanisms.
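The two roles of value representations described above can be made concrete with a minimal reinforcement-learning sketch. This is an illustrative toy, not the authors' task model: `softmax_choice` and `update_value` are hypothetical names, and the softmax choice rule and delta-rule update are standard textbook components assumed here for exposition.

```python
import math
import random

def softmax_choice(q_values, beta=3.0, rng=random):
    """Choice role: compare the expected values of the available
    options and select one. A softmax turns values into choice
    probabilities; higher beta means more deterministic selection
    of the highest-valued option."""
    exps = [math.exp(beta * q) for q in q_values]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1

def update_value(q, reward, alpha=0.1):
    """Learning role: compare the expected value to the reward
    actually received and update the expectation by a fraction
    (alpha) of the reward prediction error (delta rule)."""
    delta = reward - q  # reward prediction error
    return q + alpha * delta
```

In this framing, the paper's claim is that orbitofrontal value signals feed the `update_value`-like computation (performed elsewhere in the brain) rather than the `softmax_choice`-like comparison.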

Keywords: decision making; electrophysiology; learning; neuroscience; orbitofrontal cortex; planning; rat; reinforcement learning.

MeSH terms

  • Animals
  • Choice Behavior / physiology
  • Cognition / physiology
  • Decision Making / physiology
  • Humans
  • Prefrontal Cortex* / physiology
  • Rats
  • Reward
  • Rodentia*

Associated data

  • figshare/10.6084/m9.figshare.20449140