Bias of AI-generated content: an examination of news produced by large language models

Sci Rep. 2024 Mar 4;14(1):5224. doi: 10.1038/s41598-024-55686-2.

Abstract

Large language models (LLMs) have the potential to transform our lives and work through the content they generate, known as AI-Generated Content (AIGC). To harness this transformation, we need to understand the limitations of LLMs. Here, we investigate the bias of AIGC produced by seven representative LLMs, including ChatGPT and LLaMA. We collect news articles from The New York Times and Reuters, both known for their dedication to providing unbiased news. We then apply each examined LLM to generate news content using the headlines of these articles as prompts, and evaluate the gender and racial biases of the resulting AIGC by comparing it with the original news articles. We further analyze the gender bias of each LLM under biased prompts by adding gender-biased messages to prompts constructed from these news headlines. Our study reveals that the AIGC produced by each examined LLM demonstrates substantial gender and racial biases. Moreover, the AIGC generated by each LLM exhibits notable discrimination against women and Black individuals. Among the LLMs, the AIGC generated by ChatGPT demonstrates the lowest level of bias, and ChatGPT is the only model that declines to generate content when given biased prompts.
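To make the comparison described above concrete, the following is a minimal sketch of one way to contrast gendered language in an original article versus LLM-generated text. The word lists, the gender_term_share function, and the example texts are illustrative assumptions, not the paper's actual metric or data.

    # Hypothetical sketch: compare the share of female-associated terms
    # in an original article versus an LLM-generated article.
    import re
    from collections import Counter

    FEMALE_TERMS = {"she", "her", "hers", "woman", "women", "female"}  # assumed list
    MALE_TERMS = {"he", "him", "his", "man", "men", "male"}            # assumed list

    def gender_term_share(text: str) -> float:
        """Return the share of female-associated terms among all gendered terms."""
        tokens = re.findall(r"[a-z']+", text.lower())
        counts = Counter(t for t in tokens if t in FEMALE_TERMS | MALE_TERMS)
        female = sum(counts[t] for t in FEMALE_TERMS)
        male = sum(counts[t] for t in MALE_TERMS)
        total = female + male
        return female / total if total else 0.5  # 0.5 when no gendered terms appear

    original_article = "She said the policy would help women and men alike."
    generated_article = "He argued the policy would help men in the workforce."

    print("original :", gender_term_share(original_article))
    print("generated:", gender_term_share(generated_article))
    # A large gap between the two shares would flag a potential gender skew
    # in the generated text relative to the source article.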

Keywords: AI-generated content (AIGC); Bias; ChatGPT; Gender bias; Generative AI; Large language model (LLM); Prompt; Racial bias.

MeSH terms

  • Bias
  • Female
  • Humans
  • Language
  • Male
  • Sexism