Bias in LLMs as Annotators: The Effect of Party Cues on Labelling Decision by Large Language Models
Sebastian Vallejo Vera, Hunter Driggers
arXiv.org Artificial Intelligence
The increasing sophistication of large language models (LLMs) has given them a more prominent presence in political science research. One area gathering significant attention in the field is the use of LLMs as annotators. Research has shown promising results, with LLMs often outperforming human coders (Gilardi, Alizadeh and Kubli, 2023) and providing comparable accuracy when labelling political text across multiple languages (Heseltine and Clemm von Hohenberg, 2024). While researchers have evaluated the performance of LLMs as annotators across different domains, there is still little information on how the known biases of LLMs (see Gallegos et al., 2024) can affect their performance. For human annotators, studies show that political cues, such as party, have an effect on their coding decisions (Laver and Garry, 2000; Benoit et al., 2016; Ennser-Jedenastik and Meyer, 2018).
Aug-28-2024
- Genre:
- Research Report
- Experimental Study (0.94)
- New Finding (1.00)
- Industry:
- Government > Immigration & Customs (0.71)