Frame In, Frame Out: Do LLMs Generate More Biased News Headlines than Humans?
Valeria Pastorino, Nafise Sadat Moosavi
arXiv.org Artificial Intelligence
Framing in media critically shapes public perception by selectively emphasizing some details while downplaying others. With the rise of large language models in automated news and content creation, there is growing concern that these systems may introduce or even amplify framing biases compared to human authors. In this paper, we explore how framing manifests in both out-of-the-box and fine-tuned LLM-generated news content. Our analysis reveals that, particularly in politically and socially sensitive contexts, LLMs tend to exhibit more pronounced framing than their human counterparts. In addition, we observe significant variation in framing tendencies across different model architectures, with some models displaying notably higher biases. These findings point to the need for effective post-training mitigation strategies and tighter evaluation frameworks to ensure that automated news content upholds the standards of balanced reporting.
May 9, 2025