Hidden Reward: Affect and Its Prediction Errors as Windows Into Subjective Value

Curr Dir Psychol Sci. 2024 Apr;33(2):93-99. doi: 10.1177/09637214231217678. Epub 2024 Jan 19.

Abstract

Scientists increasingly apply concepts from reinforcement learning to affect, but which concepts should apply? And what can their application reveal that we cannot know from directly observable states alone? An important reinforcement-learning concept is the difference between reward expectations and outcomes. Such reward prediction errors have become foundational to research on adaptive behavior in humans, animals, and machines. Owing to a historical focus on animal models and on observable reward (e.g., food or money), however, relatively little attention has been paid to the fact that humans can additionally report expected and experienced affect (e.g., feelings). Reflecting a broader "rise of affectivism," attention has started to shift, revealing the explanatory power of expected and experienced feelings, including their prediction errors, above and beyond observable reward. We propose that applying concepts from reinforcement learning to affect holds promise for elucidating subjective value. At the same time, we urge scientists to test, rather than inherit, concepts that may not apply directly.
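
For orientation, the reward prediction error invoked above is conventionally formalized in temporal-difference reinforcement learning as the discrepancy between the outcome obtained and the outcome expected. The following is the standard textbook form, not necessarily the exact formulation used in this article:

\delta_t = r_t + \gamma \, V(s_{t+1}) - V(s_t), \qquad V(s_t) \leftarrow V(s_t) + \alpha \, \delta_t

where r_t is the observed reward, V(\cdot) is the learned value estimate, \gamma is a discount factor, and \alpha is a learning rate. On the abstract's reading, analogous error terms could in principle be defined over self-reported expected and experienced feelings rather than over observable reward alone.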

Keywords: affect; prediction errors; reinforcement learning; subjective value.