Bonus or Not? Learn to Reward in Crowdsourcing
Yin, Ming (Harvard University) | Chen, Yiling (Harvard University)
Recent work has shown that the quality of work produced in a crowdsourcing working session can be influenced by the presence of performance-contingent financial incentives, such as bonuses for exceptional performance. We take an algorithmic approach to deciding when to offer bonuses in a working session so as to improve the overall utility that a requester derives from the session. Specifically, we propose and train an input-output hidden Markov model to learn the impact of bonuses on work quality, and then use this model to dynamically decide whether to offer a bonus on each task in a working session to maximize a requester’s utility. Experiments on Amazon Mechanical Turk show that our approach leads to higher utility for the requester than fixed and random bonus schemes do. Simulations on synthesized data sets further demonstrate the robustness of our approach in improving requester utility across different worker populations and worker behaviors.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > California (0.04)
- Asia > China > Hong Kong (0.04)
- Information Technology > Communications > Social Media > Crowdsourcing (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (1.00)
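The abstract's idea of an input-output HMM driving per-task bonus decisions can be sketched minimally. This is not the authors' implementation: the two latent effort states, the transition matrices, the emission probabilities, and the cost/value numbers below are all made-up placeholders; only the structure (bonus decision as input, work quality as output, utility-maximizing choice) follows the abstract.

```python
# Sketch of an input-output HMM for bonus decisions (illustrative parameters).
import numpy as np

# Hypothetical latent "effort" states: 0 = low, 1 = high.
# Transition matrices conditioned on the input (the bonus decision):
# T[a][i][j] = P(next state j | current state i, action a).
T = {
    0: np.array([[0.8, 0.2], [0.4, 0.6]]),   # no bonus offered
    1: np.array([[0.5, 0.5], [0.1, 0.9]]),   # bonus offered
}
# Emission model: P(high-quality output | latent state).
quality = np.array([0.3, 0.9])

def expected_utility(belief, action, bonus_cost=0.1, value=1.0):
    """One-step expected requester utility of a bonus decision,
    given the current belief over the worker's latent states."""
    next_belief = belief @ T[action]          # propagate belief under action
    p_good = next_belief @ quality            # P(high-quality work)
    return value * p_good - bonus_cost * action

belief = np.array([0.5, 0.5])                 # uniform prior over effort states
best_action = max((0, 1), key=lambda a: expected_utility(belief, a))
```

With these placeholder numbers, offering the bonus yields expected utility 0.62 versus 0.54 without it, so the sketch picks the bonus; a real system would re-estimate the belief after observing each task's quality and repeat the decision per task.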
Incentives to Counter Bias in Human Computation
Faltings, Boi (EPFL) | Jurca, Radu (Google) | Pu, Pearl (EPFL) | Tran, Bao Duy (EPFL)
In online labor platforms such as Amazon Mechanical Turk, a good strategy for obtaining quality answers is to aggregate the answers submitted by multiple workers, exploiting the wisdom of the crowd. However, human computation is susceptible to systematic biases, which cannot be corrected simply by using multiple workers. We investigate a game-theoretic bonus scheme, called Peer Truth Serum (PTS), to overcome this problem. We report on the design and outcomes of a set of experiments to validate this scheme. Results show that Peer Truth Serum can indeed correct these biases and increase answer accuracy by up to 80%.
- Europe > Switzerland > Zürich > Zürich (0.14)
- North America > United States > New York (0.04)
- North America > United States > California > San Mateo County > Menlo Park (0.04)
- Europe > Switzerland > Vaud > Lausanne (0.04)
- Research Report > New Finding (0.67)
- Research Report > Experimental Study (0.48)
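The core mechanism behind Peer Truth Serum-style payments can be sketched as follows. This is an illustrative sketch, not the paper's exact scheme: the specific prior values and the payment scale are assumptions; the structural idea, that a worker is rewarded for agreeing with a randomly chosen peer, with the reward inversely weighted by the public prior probability of the agreed answer so that agreement on a priori unlikely answers pays more, is what counters systematic bias toward the "expected" answer.

```python
# Sketch of a Peer Truth Serum-style payment rule (illustrative values).
def pts_payment(answer, peer_answer, prior, scale=1.0):
    """Bonus for `answer` given one randomly chosen peer's report and
    a public prior distribution over answers."""
    if answer != peer_answer:
        return 0.0                     # no agreement, no bonus
    return scale / prior[answer]       # rarer agreed answers pay more

# Hypothetical public prior over two possible answers.
prior = {"A": 0.7, "B": 0.3}
```

Under this prior, two workers agreeing on the unlikely answer "B" earn roughly 3.3x the base scale, while agreeing on the likely answer "A" earns about 1.4x, so truthfully reporting a surprising observation is more profitable than defaulting to the popular answer.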