GPT is Not an Annotator: The Necessity of Human Annotation in Fairness Benchmark Construction
Felkner, Virginia K., Thompson, Jennifer A., May, Jonathan
arXiv.org Artificial Intelligence
Social biases in LLMs are usually measured via bias benchmark datasets. Current benchmarks have limitations in scope, grounding, quality, and human effort required. Previous work has shown success with a community-sourced, rather than crowd-sourced, approach to benchmark development. However, this work still required considerable effort from annotators with relevant lived experience. This paper explores whether an LLM (specifically, GPT-3.5-Turbo) can assist with the task of developing a bias benchmark dataset from responses to an open-ended community survey. We also extend the previous work to a new community and set of biases: the Jewish community and antisemitism. Our analysis shows that GPT-3.5-Turbo has poor performance on this annotation task and produces unacceptable quality issues in its output. Thus, we conclude that GPT-3.5-Turbo is not an appropriate substitute for human annotation in sensitive tasks related to social biases, and that its use actually negates many of the benefits of community-sourcing bias benchmarks.
May-24-2024