Efficient crowdsourcing of crowd-generated microtasks

PLoS One. 2020 Dec 17;15(12):e0244245. doi: 10.1371/journal.pone.0244245. eCollection 2020.

Abstract

Allowing members of the crowd to propose novel microtasks for one another is an effective way to combine the efficiencies of traditional microtask work with the inventiveness and hypothesis generation potential of human workers. However, microtask proposal leads to a growing set of tasks that may overwhelm limited crowdsourcer resources. Crowdsourcers can employ methods to use their resources efficiently, but algorithmic approaches to efficient crowdsourcing generally require a fixed task set of known size. In this paper, we introduce cost forecasting as a means for a crowdsourcer to use efficient crowdsourcing algorithms with a growing set of microtasks. Cost forecasting allows the crowdsourcer to decide between eliciting new tasks from the crowd or receiving responses to existing tasks based on whether new tasks will cost less to complete than existing tasks, efficiently balancing resources as crowdsourcing occurs. Experiments with real and synthetic crowdsourcing data show that cost forecasting leads to improved accuracy. Accuracy and efficiency gains for crowd-generated microtasks hold the promise to further leverage the creativity and wisdom of the crowd, with applications such as generating more informative and diverse training data for machine learning applications and improving the performance of user-generated content and question-answering platforms.
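The decision rule the abstract describes — spend the next unit of budget on eliciting a new crowd-proposed task only when it is forecast to cost less to complete than any existing task — can be sketched as follows. This is a minimal illustration under assumed definitions: the `Task` class, the lead-by-`margin` stopping rule used to forecast remaining cost, and the `next_action` function are all hypothetical stand-ins, not the paper's actual model.

```python
# Illustrative sketch of a cost-forecasting decision rule (assumptions, not
# the paper's actual algorithm): compare the forecast cost of a brand-new
# crowd-proposed task against the remaining cost of each existing task.

from dataclasses import dataclass


@dataclass
class Task:
    yes: int = 0  # agreeing responses collected so far
    no: int = 0   # disagreeing responses collected so far


def remaining_cost(task: Task, margin: int = 3) -> int:
    """Forecast how many more responses are needed before one label leads
    by `margin` votes (a simple stopping rule standing in for whatever
    completion criterion the crowdsourcer actually uses)."""
    lead = abs(task.yes - task.no)
    return max(margin - lead, 0)


def next_action(open_tasks: list, new_task_forecast: float,
                margin: int = 3) -> str:
    """Return 'elicit' (ask the crowd to propose a new task) when a new
    task is forecast to cost less than finishing any existing task;
    otherwise 'respond' (collect another answer for an existing task)."""
    if not open_tasks:
        return "elicit"
    cheapest = min(remaining_cost(t, margin) for t in open_tasks)
    return "elicit" if new_task_forecast < cheapest else "respond"
```

For example, a task with votes 2-0 under a lead-of-3 rule is forecast to need one more response; a new task forecast at 0.5 responses would then be elicited, while a forecast of 1.5 would route the budget to the existing task instead.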

Publication types

  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Algorithms*
  • Computer Simulation
  • Crowdsourcing / methods*
  • Humans
  • Machine Learning*
  • Problem Solving*
  • Task Performance and Analysis*

Grants and funding

This material is based upon work supported by the National Science Foundation under Grant No. IIS-1447634 (JB). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.