Performance errors during rodent learning reflect a dynamic choice strategy

Curr Biol. 2024 Apr 26:S0960-9822(24)00457-3. doi: 10.1016/j.cub.2024.04.017. Online ahead of print.

Abstract

Humans, even as infants, use cognitive strategies, such as exploration and hypothesis testing, to learn about causal interactions in the environment. In animal learning studies, however, it is challenging to disentangle higher-order behavioral strategies from errors arising from imperfect task knowledge or inherent biases. Here, we trained head-fixed mice on a wheel-based auditory two-choice task and exploited intra- and inter-animal variability to identify the drivers of errors during learning. Performance errors during learning were dominated by a choice bias that, despite appearing maladaptive, reflected a dynamic strategy. Early in learning, mice developed an internal model of the task contingencies, such that violating their expectation of reward on correct trials (via short blocks of non-rewarded "probe" trials) led to an abrupt shift in strategy. During probe blocks, mice behaved more accurately and with less bias, using their learned stimulus-action knowledge to test whether the outcome contingencies had changed. Despite possessing this knowledge, mice continued to exhibit a strong choice bias during reinforced trials. This bias operated on a timescale of tens to hundreds of trials with a dynamic structure, shifting between left-biased, right-biased, and unbiased epochs. Biased epochs also coincided with faster motor kinematics. Although bias decreased across learning, expert mice continued to exhibit short bouts of biased choices interspersed with longer bouts of unbiased choices and higher performance. Together, these findings suggest that during learning, rodents actively probe their environment in a structured manner to refine their decision-making and maintain long-term flexibility.

Keywords: auditory behavior; behavior; decision-making; exploration; learning; reinforcement learning.