Bonus scheme


Bonus or Not? Learn to Reward in Crowdsourcing

Yin, Ming (Harvard University) | Chen, Yiling (Harvard University)

AAAI Conferences

Recent work has shown that the quality of work produced in a crowdsourcing working session can be influenced by the presence of performance-contingent financial incentives in that session, such as bonuses for exceptional performance. We take an algorithmic approach to deciding when to offer bonuses in a working session so as to improve the overall utility that a requester derives from the session. Specifically, we propose and train an input-output hidden Markov model to learn the impact of bonuses on work quality, and we then use this model to dynamically decide whether to offer a bonus on each task in a working session to maximize the requester's utility. Experiments on Amazon Mechanical Turk show that our approach leads to higher utility for the requester than fixed and random bonus schemes do. Simulations on synthesized data sets further demonstrate that our approach robustly improves requester utility across different worker populations and worker behaviors.
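
As a rough illustration of this kind of approach, the sketch below pairs a hand-specified input-output HMM with a greedy one-step-lookahead bonus policy: it maintains a belief over the worker's latent effort state and offers a bonus only when the expected quality gain exceeds the bonus cost. Every quantity in it (the transition and emission tables T and E, QUALITY_VALUE, BONUS_COST, and the lookahead rule itself) is an illustrative assumption; the paper learns its IOHMM from Mechanical Turk data, and its actual decision procedure may differ.

```python
import numpy as np

# Illustrative sketch only: a tiny input-output HMM (IOHMM) where the
# hidden state is worker effort (0 = low, 1 = high), the input is the
# bonus decision (0/1), and the output is observed quality (0/1).
# All probabilities and costs below are made-up assumptions.

# T[a][i, j] = P(next effort j | current effort i, bonus action a)
T = {
    0: np.array([[0.9, 0.1],   # no bonus: effort tends to decay
                 [0.4, 0.6]]),
    1: np.array([[0.5, 0.5],   # bonus: effort tends to rise
                 [0.1, 0.9]]),
}
# E[i, o] = P(observed quality o | effort i)
E = np.array([[0.8, 0.2],
              [0.3, 0.7]])

QUALITY_VALUE = 1.0   # requester's value for a good answer (assumed)
BONUS_COST = 0.18     # cost of paying one bonus (assumed)

def choose_bonus(belief):
    """Greedy one-step lookahead: offer the bonus iff the expected
    quality gain it buys exceeds its cost."""
    utility = {}
    for a in (0, 1):
        p_good = (belief @ T[a]) @ E[:, 1]
        utility[a] = p_good * QUALITY_VALUE - a * BONUS_COST
    return int(utility[1] > utility[0])

def update_belief(belief, action, obs):
    """Forward-filter step: predict with T[action], correct with E."""
    b = (belief @ T[action]) * E[:, obs]
    return b / b.sum()

# Simulate one working session and track the requester's utility.
rng = np.random.default_rng(0)
belief = np.array([0.5, 0.5])  # uniform prior over effort states
state, total = 0, 0.0
for task in range(10):
    a = choose_bonus(belief)
    state = rng.choice(2, p=T[a][state])
    obs = rng.choice(2, p=E[state])
    total += obs * QUALITY_VALUE - a * BONUS_COST
    belief = update_belief(belief, a, obs)
print(f"session utility: {total:.2f}")
```

With these toy numbers, the policy offers a bonus only once the belief places more than about 60% of its mass on the low-effort state, mirroring the per-task, state-dependent decisions the abstract describes.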


Incentives to Counter Bias in Human Computation

Faltings, Boi (EPFL) | Jurca, Radu (Google) | Pu, Pearl (EPFL) | Tran, Bao Duy (EPFL)

AAAI Conferences

In online labor platforms such as Amazon Mechanical Turk, a good strategy for obtaining quality answers is to aggregate the answers submitted by multiple workers, exploiting the wisdom of the crowd. However, human computation is susceptible to systematic biases that cannot be corrected simply by using multiple workers. We investigate a game-theoretic bonus scheme, called the Peer Truth Serum (PTS), to overcome this problem. We report on the design and outcomes of a set of experiments to validate this scheme. Results show that the Peer Truth Serum can indeed correct these biases and increase answer accuracy by up to 80%.
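
For intuition about how such a scheme can counter bias, the sketch below implements one simple form of a PTS-style payment rule: a worker paired with a random peer is paid in inverse proportion to the prior probability of their answer when the two answers agree, and nothing otherwise. The prior R, the scale factor, and the pts_bonus helper are illustrative assumptions rather than the experimental setup of the paper.

```python
import random

# Illustrative sketch of one simple form of a Peer Truth Serum (PTS)
# payment rule: a worker paired with a random peer earns a bonus
# proportional to 1 / R(answer) when the two answers agree, and nothing
# otherwise. The prior R, the scale, and the task below are assumptions.

R = {"A": 0.7, "B": 0.3}  # assumed public prior over the two answers
SCALE = 0.1               # assumed payment scale factor

def pts_bonus(report, peer_report, prior=R, scale=SCALE):
    """Agreement on an a-priori unlikely answer pays the most, which
    rewards surprising consensus and counters the pull of the prior."""
    return scale / prior[report] if report == peer_report else 0.0

# Score each worker against a randomly chosen peer.
reports = {"w1": "B", "w2": "B", "w3": "A"}
random.seed(0)
for worker, answer in reports.items():
    peer_answer = random.choice(
        [r for w, r in reports.items() if w != worker])
    print(worker, answer, "->", round(pts_bonus(answer, peer_answer), 3))
```

Because agreement on the a-priori unlikely answer pays more, a worker who privately believes the unpopular answer is correct has no incentive to drift toward the prior-favored one.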