When majority voting fails: Comparing quality assurance methods for noisy human computation environment

Sun, Yu-An, Dance, Christopher

arXiv.org Artificial Intelligence 

ABSTRACT

Quality assurance remains a key topic in human computation research. Prior work indicates that majority voting is effective for low-difficulty tasks but has limitations for harder tasks. This paper explores two methods of addressing this problem: tournament selection and elimination selection, which exploit 2-, 3-, and 4-way comparisons between different answers to human computation tasks. Our experimental results and statistical analyses show that both methods produce the correct answer in noisy human computation environments more often than majority voting. Furthermore, we find that the use of 4-way comparisons can significantly reduce the cost of quality assurance relative to the use of 2-way comparisons.

INTRODUCTION

Human computation is a growing research field that holds the promise of humans and computers working seamlessly together to implement powerful systems. Algorithmically aggregating outputs from human computation workers is key to such an integrated human-computer system (Little & Sun 2011).
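To make the contrast concrete, the following is a minimal sketch of the two aggregation styles the abstract mentions: plain majority voting over worker answers, and a single-elimination tournament that repeatedly applies k-way comparisons (k = 2, 3, or 4). The comparison model here, where each k-way judgment identifies the correct answer with some probability q when it is among the candidates, is an illustrative assumption, not the paper's exact experimental setup.

```python
import random

def majority_vote(answers):
    """Return the most frequent answer (ties broken arbitrarily)."""
    counts = {}
    for a in answers:
        counts[a] = counts.get(a, 0) + 1
    return max(counts, key=counts.get)

def tournament_select(answers, correct, q, k=2, rng=random):
    """Single-elimination tournament over candidate answers.

    Each k-way comparison picks the correct answer with probability q
    when it is among the k candidates; otherwise a random candidate
    in the group advances. (Hypothetical comparison model.)
    """
    pool = list(answers)
    while len(pool) > 1:
        rng.shuffle(pool)
        winners = []
        for i in range(0, len(pool), k):
            group = pool[i:i + k]
            if correct in group and rng.random() < q:
                winners.append(correct)
            else:
                winners.append(rng.choice(group))
        pool = winners
    return pool[0]
```

Under this toy model, a tournament with reliable comparisons can recover the correct answer even when it is held by only a minority of workers, which is exactly the regime where majority voting fails; larger k reduces the number of comparison rounds needed, illustrating the cost argument for 4-way comparisons.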
