A comparison of human and GPT-4 use of probabilistic phrases in a coordination game

Sci Rep. 2024 Mar 21;14(1):6835. doi: 10.1038/s41598-024-56740-9.

Abstract

English speakers use probabilistic phrases such as likely to communicate information about the probability or likelihood of events. Communication is successful to the extent that the listener grasps what the speaker means to convey, and, if communication is successful, individuals can potentially coordinate their actions based on shared knowledge about uncertainty. We first assessed human ability to estimate the probability and the ambiguity (imprecision) of twenty-three probabilistic phrases in a coordination game in two different contexts, investment advice and medical advice. We then had GPT-4 (OpenAI), a Large Language Model, complete the same tasks as the human participants. We found that GPT-4's probability estimates in both the Investment and Medical Contexts were at least as close to the human participants' estimates as the participants' estimates were to one another. However, further analyses of residuals disclosed small but significant differences between human and GPT-4 performance. Human probability estimates were compressed relative to those of GPT-4. Estimates of probability for both the human participants and GPT-4 were little affected by context. We propose that evaluation methods based on coordination games provide a systematic way to assess what GPT-4 and similar programs can and cannot do.
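The comparison described above can be sketched in a few lines of code. The example below is a hypothetical illustration, not the study's actual data or analysis pipeline: it takes made-up mean probability estimates for a handful of phrases and fits a least-squares slope of human estimates against model estimates. A slope below 1 is one simple way to quantify the kind of "compression" of human estimates toward the mid-range that the abstract reports.

```python
# Hypothetical sketch of comparing human vs. model probability estimates
# for probabilistic phrases. All numbers are illustrative placeholders,
# not data from the study.

phrases = ["almost certain", "likely", "possible", "unlikely", "almost no chance"]
model_est = [0.95, 0.75, 0.50, 0.25, 0.05]  # hypothetical GPT-4 mean estimates
human_est = [0.90, 0.72, 0.50, 0.30, 0.10]  # hypothetical human mean estimates

def ls_slope(x, y):
    """Slope of the least-squares regression line y ~ a + b*x."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

slope = ls_slope(model_est, human_est)
# A slope < 1 means the human estimates span a narrower range than the
# model's, i.e. they are compressed toward the middle of the scale.
print(f"slope = {slope:.3f}")
```

With these placeholder numbers the fitted slope is below 1, mirroring (in form only) the compression effect the study reports for human estimates relative to GPT-4's.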

Keywords: Ambiguity; GPT-4; LLM; Pragmatics; Probabilistic phrases; Probability.

MeSH terms

  • Communication*
  • Humans
  • Investments*
  • Knowledge
  • Language
  • Probability