Active Multilabel Crowd Consensus

IEEE Trans Neural Netw Learn Syst. 2021 Apr;32(4):1448-1459. doi: 10.1109/TNNLS.2020.2984729. Epub 2021 Apr 2.

Abstract

Crowdsourcing is an economical and efficient strategy for collecting data annotations through an online platform. Crowd workers with different expertise are paid for their service, and the task requester usually has a limited budget. How to collect reliable annotations for multilabel data and compute the consensus within a limited budget is an interesting and challenging, but rarely studied, problem. In this article, we propose a novel approach to accomplish active multilabel crowd consensus (AMCC). AMCC accounts for the commonality and individuality of workers and assumes that workers can be organized into different groups, where each group includes a set of workers who share similar annotation behaviors and label correlations. To achieve an effective multilabel consensus, AMCC models workers' annotations via a linear combination of commonality and individuality and reduces the impact of unreliable workers by assigning smaller weights to their groups. To collect reliable annotations at reduced cost, AMCC introduces an active crowdsourcing learning strategy that selects sample-label-worker triplets. In a triplet, the selected sample and label are the most informative for the consensus model, and the selected worker can reliably annotate the sample at a low cost. Our experimental results on multilabel data sets demonstrate the advantages of AMCC over state-of-the-art solutions in computing crowd consensus and in reducing annotation cost by choosing cost-effective triplets.
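
To make the two core ideas in the abstract concrete, the sketch below illustrates (1) modeling a worker's annotation behavior as a linear combination of group-level commonality and worker-level individuality, and (2) scoring sample-label-worker triplets by label uncertainty, group reliability, and worker cost. This is a minimal illustration, not the paper's algorithm: the dimensions, the mixing weight alpha, the entropy-per-cost scoring rule, and all parameter values are assumptions for demonstration; in AMCC the group assignments, group weights, and consensus are learned from the collected annotations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): n samples, L labels,
# W workers partitioned into G groups.
n, L, W, G = 100, 5, 12, 3
group_of = rng.integers(G, size=W)      # worker -> group assignment
cost = rng.uniform(0.5, 2.0, size=W)    # per-annotation cost of each worker

# Consensus estimate: probability that each label is relevant for each sample
# (random here; inferred from worker annotations in AMCC).
consensus = rng.uniform(size=(n, L))

# Commonality (per group) and individuality (per worker) annotation tendencies,
# combined linearly as the abstract describes; alpha is an assumed mixing weight.
commonality = rng.uniform(size=(G, L))
individuality = rng.uniform(size=(W, L))
alpha = 0.7

def worker_model(w):
    """Worker w's modeled per-label annotation behavior: a linear
    combination of the group's commonality and the worker's individuality."""
    return alpha * commonality[group_of[w]] + (1 - alpha) * individuality[w]

# Group weights downweight unreliable groups (random here; learned in AMCC).
group_weight = rng.uniform(0.5, 1.0, size=G)

def triplet_score(i, l, w):
    """Heuristic utility of asking worker w to annotate label l of sample i:
    binary entropy of the consensus (informativeness) times the worker's
    group reliability, divided by the worker's cost."""
    p = np.clip(consensus[i, l], 1e-6, 1 - 1e-6)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    return entropy * group_weight[group_of[w]] / cost[w]

# Select the single most cost-effective sample-label-worker triplet to query.
best = max(
    ((i, l, w) for i in range(n) for l in range(L) for w in range(W)),
    key=lambda t: triplet_score(*t),
)
print("query triplet (sample, label, worker):", best)
```

Under these assumptions, the selection naturally favors uncertain labels (high entropy), workers from reliable groups, and cheap workers, which matches the abstract's description of choosing triplets that are informative for the consensus model yet low in cost.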

Publication types

  • Research Support, Non-U.S. Gov't