The 'Invisible', Often Unhappy Workforce That's Deciding the Future of AI
Two new reports, including a paper led by Google Research, express concern that the current trend of relying on a cheap and often disempowered pool of random global gig workers to create ground truth for machine learning systems could have major downstream implications for AI. Among a range of conclusions, the Google study finds that crowdworkers' own biases are likely to become embedded in the AI systems whose ground truths are based on their responses; that widespread unfair work practices (including in the US) on crowdworking platforms are likely to degrade the quality of responses; and that the 'consensus' system (effectively a 'mini-election' over some piece of ground truth that will influence downstream AI systems) currently used to resolve disputes can actually discard the best and/or most informed responses.

That's the bad news; the worse news is that virtually all of the remedies are expensive, time-consuming, or both.

The first paper, from five Google researchers, is titled Whose Ground Truth? Accounting for Individual and Collective Identities Underlying Dataset Annotation; the second, from two researchers at Syracuse University in New York, is titled The Origin and Value of Disagreement Among Data Labelers: A Case Study of Individual Differences in Hate Speech Annotation.
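To make the consensus mechanism concrete, here is a minimal sketch (the function and labels are hypothetical, not taken from either paper) of majority-vote label aggregation, the kind of 'mini-election' described above. It shows how a dissenting annotation, even one from the most informed annotator, is simply discarded:

```python
from collections import Counter

def majority_vote(labels: list[str]) -> str:
    """Return the most common label among annotators.

    Ties are broken arbitrarily by insertion order, which is one
    reason naive consensus can silently drop minority judgments.
    """
    counts = Counter(labels)
    winner, _ = counts.most_common(1)[0]
    return winner

# Three crowdworkers label the same post; the lone dissenting label
# is thrown away, even if it came from the best-informed annotator.
annotations = ["not_hate_speech", "not_hate_speech", "hate_speech"]
print(majority_vote(annotations))  # -> "not_hate_speech"
```

The minority vote leaves no trace in the resulting ground truth, which is the failure mode both papers flag: disagreement can carry signal (expertise, lived experience) that a simple plurality erases.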
Dec-14-2021, 14:55:13 GMT