Corticostriatal circuit mechanisms of value-based action selection: Implementation of reinforcement learning algorithms and beyond

Behav Brain Res. 2016 Sep 15;311:110-121. doi: 10.1016/j.bbr.2016.05.017. Epub 2016 May 9.

Abstract

Value-based action selection has been suggested to be realized in corticostriatal local circuits through competition among neural populations. In this article, we review theoretical and experimental studies that have developed and tested this notion, and we provide new perspectives on how the local-circuit selection mechanisms implement reinforcement learning (RL) algorithms and computations beyond them. Striatal neurons are mostly inhibitory, and lateral inhibition among them has classically been proposed to realize "Winner-Take-All (WTA)" selection of the maximum-valued action (i.e., a 'max' operation). This view has been challenged by findings that striatal lateral inhibition is weak, sparse, and asymmetric, which suggest more complex dynamics; nevertheless, WTA-like competition could still occur on short time scales. Unlike the striatal circuit, the cortical circuit contains recurrent excitation, which may enable retention or temporal integration of information and probabilistic "soft-max" selection. The striatal "max" circuit and the cortical "soft-max" circuit might jointly implement the RL algorithm Q-learning; the cortical circuit might similarly serve other algorithms such as SARSA. In these implementations, the cortical circuit presumably sustains activity representing the executed action, which exerts a negative influence on dopamine neurons so that they can compute the reward prediction error. The more complex dynamics suggested for striatal, as well as cortical, circuits on longer time scales can be viewed as a sequence of short WTA fragments, but their computational role remains open: such a sequence might represent (1) sequential state-action-state transitions, constituting replay or simulation of an internal model, (2) a single state or action encoded by the whole trajectory, or (3) probabilistic sampling of states or actions.
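The abstract contrasts deterministic 'max' (WTA-like) selection with probabilistic 'soft-max' selection and refers to Q-learning, SARSA, and the reward prediction error. As a point of reference only (not code from the review), the following minimal Python sketch shows these standard textbook operations; the two-state, two-action task, the reward probabilities, and the parameter values (alpha, beta, gamma) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_max(q_row):
    """WTA-like 'max' selection: deterministically pick the highest-valued action."""
    return int(np.argmax(q_row))

def select_softmax(q_row, beta=3.0):
    """Probabilistic 'soft-max' selection with inverse temperature beta."""
    prefs = beta * (q_row - q_row.max())            # shift by the max for numerical stability
    p = np.exp(prefs) / np.exp(prefs).sum()
    return int(rng.choice(len(q_row), p=p))

# Illustrative 2-state, 2-action task (assumed for demonstration only):
# in each state, action 1 is rewarded more often; the next state is drawn at random.
reward_probs = np.array([[0.2, 0.8],
                         [0.4, 0.6]])

def run(algorithm="q_learning", n_trials=2000, alpha=0.1, gamma=0.9):
    """Tabular Q-learning or SARSA with soft-max action selection."""
    q = np.zeros((2, 2))                            # Q(state, action)
    s = 0
    a = select_softmax(q[s])
    for _ in range(n_trials):
        r = float(rng.random() < reward_probs[s, a])
        s_next = int(rng.integers(2))
        a_next = select_softmax(q[s_next])
        if algorithm == "q_learning":
            target = r + gamma * q[s_next].max()    # 'max' over next-state action values
        else:                                       # SARSA
            target = r + gamma * q[s_next, a_next]  # value of the action actually taken next
        delta = target - q[s, a]                    # reward prediction error (dopamine-like signal)
        q[s, a] += alpha * delta
        s, a = s_next, a_next
    return q

q_learned = run("q_learning")
print("Q-learning values:\n", q_learned)
print("SARSA values:\n", run("sarsa"))
print("Greedy ('max') choice in state 0:", select_max(q_learned[0]))
```

In this sketch the only difference between the two update rules is the target: Q-learning takes the maximum over next-state action values, whereas SARSA uses the value of the action actually selected next, so the same soft-max choice rule can serve both.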

Keywords: Action selection; Corticostriatal; Nonlinear dynamics; Reinforcement learning; Reward prediction error; Winner-take-all.

Publication types

  • Review

MeSH terms

  • Algorithms
  • Animals
  • Cerebral Cortex / physiology*
  • Choice Behavior / physiology*
  • Corpus Striatum / physiology*
  • Humans
  • Models, Neurological
  • Motor Activity / physiology*
  • Neural Pathways / physiology
  • Reinforcement, Psychology