Evaluating information-theoretic measures of word prediction in naturalistic sentence reading

Neuropsychologia. 2019 Nov;134:107198. doi: 10.1016/j.neuropsychologia.2019.107198. Epub 2019 Sep 22.

Abstract

We review information-theoretic measures of cognitive load during sentence processing that have been used to quantify word prediction effort. Two such measures, surprisal and next-word entropy, suffer from shortcomings when employed for a predictive processing view. We propose a novel metric, lookahead information gain, that can overcome these shortcomings. We estimate the different measures using probabilistic language models and then put them to the test by analysing how well they predict human processing effort in three data sets of naturalistic sentence reading. Our results replicate the well-known effect of surprisal on word reading effort but do not indicate a role for next-word entropy or lookahead information gain. Our computational results suggest that, in a predictive processing system, the costs of predicting may outweigh the gains, which poses a potential limit on the value of a predictive mechanism for language processing. These results illustrate the unresolved problem of finding estimates of word-by-word prediction that, first, are truly independent of perceptual processing of the to-be-predicted words; second, are statistically reliable predictors of experimental data; and third, can be derived from more general assumptions about the cognitive processes involved.
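For readers unfamiliar with the two established measures: the surprisal of a word is the negative log-probability of that word given the preceding words, and next-word entropy is the entropy of the model's distribution over the upcoming word. The sketch below is not taken from the paper; it illustrates the textbook definitions of both quantities on a made-up next-word distribution. The paper's own estimates come from trained probabilistic language models, and the definition of the novel lookahead information gain measure is given in the full text rather than reproduced here.

    import math

    # Hypothetical next-word distribution P(w | context) from some language
    # model; the probabilities below are invented for illustration only.
    next_word_probs = {"the": 0.4, "a": 0.3, "cat": 0.2, "dog": 0.1}

    def surprisal(word, probs):
        """Surprisal in bits: -log2 P(word | context)."""
        return -math.log2(probs[word])

    def next_word_entropy(probs):
        """Entropy in bits of the next-word distribution:
        -sum_w P(w | context) * log2 P(w | context)."""
        return -sum(p * math.log2(p) for p in probs.values() if p > 0)

    print(surprisal("cat", next_word_probs))    # ~2.32 bits
    print(next_word_entropy(next_word_probs))   # ~1.85 bits

Note the conceptual difference the abstract turns on: surprisal can only be computed once the word has been perceived, whereas next-word entropy is defined over the distribution of words that have not yet appeared, which is why the latter is the more natural candidate for quantifying prediction effort proper.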

Keywords: Computational linguistics; Electroencephalography; Lookahead information gain; Next-word entropy; Predictive processing; Psycholinguistics; Reading time; Sentence processing; Surprisal.

Publication types

  • Research Support, Non-U.S. Gov't
  • Review

MeSH terms

  • Algorithms
  • Cognition / physiology
  • Comprehension
  • Entropy
  • Humans
  • Information Theory*
  • Natural Language Processing*
  • Reading*