Probabilistic language models in cognitive neuroscience: Promises and pitfalls

Neurosci Biobehav Rev. 2017 Dec;83:579-588. doi: 10.1016/j.neubiorev.2017.09.001. Epub 2017 Sep 5.

Abstract

Cognitive neuroscientists of language comprehension study how neural computations relate to cognitive computations during comprehension. On the cognitive side of the equation, it is important that the computations and processing complexity are explicitly defined. Probabilistic language models can be used to give a computationally explicit account of language complexity during comprehension. Such models have so far been evaluated predominantly against behavioral data; only recently have they been used to explain neurobiological signals. Measures obtained from these models emphasize the probabilistic, information-processing view of language understanding and provide a set of tools that can be used for testing neural hypotheses about language comprehension. Here, we provide a cursory review of the theoretical foundations and example neuroimaging studies employing probabilistic language models. We highlight the advantages and potential pitfalls of this approach and indicate avenues for future research.
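As a minimal illustration of the word-level complexity measures discussed in the abstract (surprisal and entropy), the sketch below estimates them from a bigram model fitted to a toy corpus. The corpus, function names, and model choice are illustrative assumptions, not taken from the paper; the studies reviewed typically train much richer models on large corpora.

```python
import math
from collections import Counter

# Toy corpus: a hypothetical stand-in for the large training corpora
# actually used to fit probabilistic language models.
corpus = "the dog chased the cat and the cat chased the dog".split()

# Count bigrams and context unigrams to estimate P(word | previous word).
bigrams = Counter(zip(corpus, corpus[1:]))
contexts = Counter(corpus[:-1])

def surprisal(prev, word):
    """Surprisal in bits: -log2 P(word | prev), from bigram counts."""
    p = bigrams[(prev, word)] / contexts[prev]
    return -math.log2(p)

def entropy(prev):
    """Entropy over next-word continuations of `prev` (expected surprisal)."""
    total = contexts[prev]
    probs = [c / total for (w1, _), c in bigrams.items() if w1 == prev]
    return -sum(p * math.log2(p) for p in probs)

# "the" is followed by "dog" and "cat" equally often (2/4 each),
# so each continuation carries 1 bit of surprisal.
print(surprisal("the", "dog"))  # 1.0
print(entropy("the"))           # 1.0
```

In the neuroimaging studies the review covers, such per-word surprisal and entropy values are typically used as regressors against EEG/MEG or fMRI signals recorded during comprehension.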

Keywords: Cognitive neuroscience of language; Computational linguistics; EEG; Entropy; Information theory; MEG; Probabilistic language models; Surprisal; fMRI.

Publication types

  • Review

MeSH terms

  • Brain / diagnostic imaging
  • Brain / physiology*
  • Cognition*
  • Cognitive Neuroscience*
  • Comprehension / physiology*
  • Humans
  • Language*
  • Models, Statistical*
  • Neuroimaging