A Personal Model of Trumpery: Linguistic Deception Detection in a Real-World High-Stakes Setting

Psychol Sci. 2022 Jan;33(1):3-17. doi: 10.1177/09567976211015941. Epub 2021 Dec 21.

Abstract

Language use differs between truthful and deceptive statements, but not all differences are consistent across people and contexts, complicating the identification of deceit in individuals. Relying on fact-checked tweets, we showed in three studies (Study 1: 469 tweets; Study 2: 484 tweets; Study 3: 24 models) how well personalized linguistic deception detection performs, developing the first deception model tailored to an individual: the 45th U.S. president. First, we found substantial linguistic differences between factually correct and factually incorrect tweets and developed a quantitative model that achieved 73% overall accuracy. Second, we tested out-of-sample prediction and achieved 74% overall accuracy. Third, we compared our personalized model with linguistic models previously reported in the literature. Our model outperformed existing models by 5 percentage points, demonstrating the added value of personalized linguistic analysis in real-world settings. Our results indicate that factually incorrect tweets by the U.S. president are not random mistakes by the sender.
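The abstract implies a supervised workflow: score each fact-checked tweet on word-category (LIWC-style) linguistic features, train a classifier to distinguish factually correct from factually incorrect tweets, and then test that classifier on later, unseen tweets. Below is a minimal sketch of that workflow, assuming a logistic-regression classifier and placeholder random data; the paper's actual feature set, model class, and data are not specified in this abstract.

# Hypothetical sketch (not the authors' code) of a personalized deception
# classifier in the spirit of the abstract: a supervised model over
# LIWC-style word-category features of fact-checked tweets.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder data standing in for real inputs: one row per tweet, one
# column per linguistic feature (e.g., rates of emotion words, negations,
# tentative words). Labels: 0 = factually correct, 1 = factually incorrect.
# With random placeholders, accuracy sits near chance; the study reports
# 73% (Study 1) and 74% out of sample (Study 2) on real data.
X_study1 = rng.random((469, 4))
y_study1 = rng.integers(0, 2, size=469)

# Estimate in-sample performance with cross-validation (Study 1 analogue).
model = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(model, X_study1, y_study1, cv=10)
print(f"Mean cross-validated accuracy: {accuracy.mean():.2f}")

# Out-of-sample prediction on a later set of tweets (Study 2 analogue).
X_study2 = rng.random((484, 4))
model.fit(X_study1, y_study1)
predictions = model.predict(X_study2)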

Keywords: LIWC; Twitter; deception detection; linguistic analysis; open data; open materials; tailored model.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Deception*
  • Humans
  • Linguistics*