Predictive Coding or Just Feature Discovery? An Alternative Account of Why Language Models Fit Brain Data

Neurobiol Lang (Camb). 2024 Apr 1;5(1):64-79. doi: 10.1162/nol_a_00087. eCollection 2024.

Abstract

Many recent studies have shown that representations drawn from neural network language models are extremely effective at predicting brain responses to natural language. But why do these models work so well? One proposed explanation is that language models and brains are similar because they have the same objective: to predict upcoming words before they are perceived. This explanation is attractive because it lends support to the popular theory of predictive coding. We provide several analyses that cast doubt on this claim. First, we show that the ability to predict future words does not uniquely (or even best) explain why some representations are a better match to the brain than others. Second, we show that within a language model, representations that are best at predicting future words are strictly worse brain models than other representations. Finally, we argue in favor of an alternative explanation for the success of language models in neuroscience: These models are effective at predicting brain responses because they generally capture a wide variety of linguistic phenomena.

Keywords: encoding models; language models; predictive coding.
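
For readers unfamiliar with the encoding-model framework named in the keywords, the sketch below illustrates the general approach referenced in the abstract: fit a regularized linear map from language-model representations to brain responses and score it by held-out prediction accuracy. This is a minimal, assumed illustration using synthetic data and hypothetical dimensions, not the paper's actual features, recordings, or hyperparameters.

```python
# Minimal illustrative sketch of a voxelwise encoding model.
# All data are synthetic stand-ins: in practice, X would hold language-model
# hidden states extracted for each stimulus time point and Y the recorded
# brain responses (e.g., fMRI) at those time points.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical sizes: n time points, d language-model feature dimensions, v voxels.
n, d, v = 1000, 256, 50

# Synthetic "language-model" features and brain responses generated from a
# random linear map plus noise (purely for illustration).
X = rng.standard_normal((n, d))
true_weights = rng.standard_normal((d, v)) * 0.1
Y = X @ true_weights + rng.standard_normal((n, v))

X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.2, random_state=0
)

# Fit one regularized linear map from features to all voxels jointly.
encoder = Ridge(alpha=10.0)
encoder.fit(X_train, Y_train)
Y_pred = encoder.predict(X_test)

def voxelwise_correlation(y_true, y_pred):
    """Correlation between predicted and observed responses, per voxel."""
    yt = y_true - y_true.mean(axis=0)
    yp = y_pred - y_pred.mean(axis=0)
    return (yt * yp).sum(axis=0) / (
        np.linalg.norm(yt, axis=0) * np.linalg.norm(yp, axis=0)
    )

# Different representations (e.g., different model layers) are compared by
# how well they predict held-out brain responses under this kind of metric.
corrs = voxelwise_correlation(Y_test, Y_pred)
print(f"mean held-out voxelwise correlation: {corrs.mean():.3f}")
```

Under this framework, the abstract's comparisons amount to asking which representations yield the highest held-out encoding performance, and whether that ranking tracks how well the same representations predict upcoming words.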