TY - GEN
T1 - A general early-stopping module for crowdsourced ranking
AU - Shan, Caihua
AU - U, Leong Hou
AU - Mamoulis, Nikos
AU - Cheng, Reynold
AU - Li, Xiang
N1 - Publisher Copyright:
© Springer Nature Switzerland AG 2020.
PY - 2020
Y1 - 2020
N2 - Crowdsourcing can be used to determine a total order for an object set (e.g., the top-10 NBA players) based on crowd opinions. This ranking problem is often decomposed into a set of microtasks (e.g., pairwise comparisons), which are distributed to a large number of workers, and the workers' answers are aggregated to infer the ranking. The number of microtasks depends on the budget allocated to the problem. Intuitively, the more microtask answers are collected, the more accurate the ranking becomes. However, it is often hard to decide the budget required for an accurate ranking. We study how a ranking process can be terminated early while still achieving a high-quality ranking and substantial budget savings. We use statistical tools to estimate the quality of the ranking result at any stage of the crowdsourcing process, and we terminate the process as soon as the desired quality is achieved. Our proposed early-stopping module can be seamlessly integrated with most existing inference algorithms and task assignment methods. Extensive experiments show that our early-stopping module outperforms existing general stopping criteria.
AB - Crowdsourcing can be used to determine a total order for an object set (e.g., the top-10 NBA players) based on crowd opinions. This ranking problem is often decomposed into a set of microtasks (e.g., pairwise comparisons), which are distributed to a large number of workers, and the workers' answers are aggregated to infer the ranking. The number of microtasks depends on the budget allocated to the problem. Intuitively, the more microtask answers are collected, the more accurate the ranking becomes. However, it is often hard to decide the budget required for an accurate ranking. We study how a ranking process can be terminated early while still achieving a high-quality ranking and substantial budget savings. We use statistical tools to estimate the quality of the ranking result at any stage of the crowdsourcing process, and we terminate the process as soon as the desired quality is achieved. Our proposed early-stopping module can be seamlessly integrated with most existing inference algorithms and task assignment methods. Extensive experiments show that our early-stopping module outperforms existing general stopping criteria.
UR - https://www.scopus.com/pages/publications/85092118326
U2 - 10.1007/978-3-030-59416-9_19
DO - 10.1007/978-3-030-59416-9_19
M3 - Conference contribution
AN - SCOPUS:85092118326
SN - 9783030594152
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 314
EP - 330
BT - Database Systems for Advanced Applications - 25th International Conference, DASFAA 2020, Proceedings
A2 - Nah, Yunmook
A2 - Cui, Bin
A2 - Lee, Sang-Won
A2 - Yu, Jeffrey Xu
A2 - Moon, Yang-Sae
A2 - Whang, Steven Euijong
PB - Springer Science and Business Media Deutschland GmbH
T2 - 25th International Conference on Database Systems for Advanced Applications, DASFAA 2020
Y2 - 24 September 2020 through 27 September 2020
ER -