Quality Expectation-Variance Tradeoffs in Crowdsourcing Contests
Gao, Xi Alice (Harvard University) | Bachrach, Yoram (Microsoft Research) | Key, Peter (Microsoft Research) | Graepel, Thore (Microsoft Research)
We examine designs for crowdsourcing contests, where participants compete for rewards given to superior solutions of a task. We theoretically analyze tradeoffs between the expectation and variance of the principal's utility (i.e., the quality of the best solution), and empirically test our theoretical predictions using a controlled experiment on Amazon Mechanical Turk. Our evaluation method is also crowdsourcing-based and relies on the peer prediction mechanism. Our theoretical analysis shows an expectation-variance tradeoff of the principal's utility in such contests through a Pareto-efficient frontier. In particular, we show that the simple contest with 2 authors and the 2-pair contest have good theoretical properties. Moreover, our empirical results show that the 2-pair contest is the superior design among all designs tested, achieving the highest expectation and lowest variance of the principal's utility.
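The expectation-variance tradeoff the abstract refers to can be illustrated with a minimal Monte Carlo sketch. This is a hypothetical model, not the paper's actual setup: it assumes each author's solution quality is an independent Uniform(0, 1) draw and that the principal's utility is the quality of the best submitted solution, so adding authors raises the expected best quality while shrinking its variance.

```python
import random
import statistics

def principal_utility(n_authors, rng):
    # Hypothetical model (not from the paper): each author's solution
    # quality is an independent Uniform(0, 1) draw; the principal's
    # utility is the quality of the best solution submitted.
    return max(rng.random() for _ in range(n_authors))

def summarize(n_authors, trials=100_000, seed=0):
    # Estimate the mean and variance of the principal's utility
    # for a contest with n_authors participants.
    rng = random.Random(seed)
    samples = [principal_utility(n_authors, rng) for _ in range(trials)]
    return statistics.mean(samples), statistics.variance(samples)

# Analytically, for the max of n i.i.d. Uniform(0, 1) draws:
#   E = n / (n + 1),  Var = n / ((n + 1)^2 * (n + 2))
for n in (2, 4):
    mean, var = summarize(n)
    print(f"{n} authors: mean ~ {mean:.3f}, variance ~ {var:.4f}")
```

Under this toy model, moving from 2 to 4 total authors pushes the expected utility up (from 2/3 toward 0.8) while cutting its variance roughly in half; the paper's contribution is characterizing which contest designs sit on the Pareto frontier of this tradeoff.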
Jul-21-2012
- Country:
- North America > United States (0.29)
- Genre:
- Research Report
- Experimental Study (0.87)
- New Finding (1.00)
- Technology:
- Information Technology
- Artificial Intelligence (1.00)
- Communications > Social Media
- Crowdsourcing (1.00)
- Game Theory (1.00)