Can individual subjective confidence in training questions predict group performance in test questions?

PLoS One. 2023 Mar 7;18(3):e0280984. doi: 10.1371/journal.pone.0280984. eCollection 2023.

Abstract

When people have to solve many tasks, they can aggregate diverse individuals' judgments using the majority rule, which often improves judgment accuracy (the wisdom of crowds). When aggregating judgments, individuals' subjective confidence is a useful cue for deciding which judgments to accept. However, can confidence in one task set predict performance not only in the same task set but also in another? We examined this issue through computer simulations using behavioral data obtained from binary-choice experimental tasks. In our simulations, we developed a "training-test" approach: we split the questions used in the behavioral experiments into "training questions" (used to identify individuals' confidence levels) and "test questions" (the questions to be solved), analogous to cross-validation in machine learning. We found that (i) in analyses of the behavioral data, confidence in a given question predicted accuracy in that same question, but did not always predict accuracy in other questions well; (ii) in a computer simulation of the agreement between two individuals' judgments, individuals with high confidence in one training question tended to make less diverse judgments in other test questions; and (iii) in a computer simulation of group judgments, groups composed of individuals with high confidence in the training question(s) generally performed well, but their performance sometimes dropped substantially in the test questions, especially when only one training question was available. These results suggest that when situations are highly uncertain, an effective strategy is to aggregate diverse individuals regardless of their confidence levels in the training questions, so as to avoid a decline in group accuracy on the test questions. We believe that our simulations, which follow a "training-test" approach, offer practical implications for maintaining a group's ability to solve many tasks.
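The following is a minimal Python sketch of the "training-test" idea described in the abstract, using synthetic data rather than the authors' behavioral dataset. The pool size, group size, number of questions, and the ability/confidence model are illustrative assumptions, not the study's actual procedure; it only shows the shape of the comparison between confidence-selected and randomly formed groups under majority voting.

```python
import numpy as np

rng = np.random.default_rng(0)

n_individuals = 100   # hypothetical respondent pool
n_questions = 50      # binary-choice questions (correct answer coded as 1)
group_size = 5

# Synthetic data: each individual has a latent ability that drives both
# accuracy and (noisily) self-reported confidence on each question.
ability = rng.uniform(0.4, 0.8, size=n_individuals)
correct = rng.random((n_individuals, n_questions)) < ability[:, None]
confidence = np.clip(ability[:, None] + rng.normal(0, 0.15, correct.shape), 0, 1)

# "Training-test" split over questions, analogous to cross-validation.
train_idx = rng.choice(n_questions, size=5, replace=False)
test_idx = np.setdiff1d(np.arange(n_questions), train_idx)

def majority_accuracy(members, questions):
    """Accuracy of the group's majority vote over the given questions.

    A question counts as solved when more than half of the group answers
    it correctly (ties are conservatively counted as incorrect).
    """
    votes = correct[np.ix_(members, questions)]
    return np.mean(votes.sum(axis=0) > len(members) / 2)

# Strategy A: select the individuals with the highest mean confidence
# on the training questions.
mean_train_conf = confidence[:, train_idx].mean(axis=1)
confident_group = np.argsort(mean_train_conf)[-group_size:]

# Strategy B: aggregate a random group regardless of confidence.
random_group = rng.choice(n_individuals, size=group_size, replace=False)

print("high-confidence group, test accuracy:",
      round(majority_accuracy(confident_group, test_idx), 3))
print("random group, test accuracy:        ",
      round(majority_accuracy(random_group, test_idx), 3))
```

Shrinking `train_idx` to a single question in this sketch mimics the paper's scenario in which only one training question is available, where confidence-based selection becomes a noisier signal of test performance.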

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Computer Simulation
  • Group Processes
  • Humans
  • Judgment*
  • Machine Learning

Grants and funding

The present study was funded by Japan Society for the Promotion of Science (JSPS) KAKENHI grants No. 18H03501 and No. 22H03915 (to HH). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.