Beyond the Link: Assessing LLMs' ability to Classify Political Content across Global Media
Alejandro De La Fuente-Cuesta, Alberto Martinez-Serra, Nienke Visscher, Laia Castro, Ana S. Cardenal
arXiv.org Artificial Intelligence
The use of large language models (LLMs) is becoming common in political science and digital media research. While LLMs have demonstrated strong performance in labelling tasks, their effectiveness in classifying Political Content (PC) from URLs remains underexplored. This article evaluates whether LLMs can accurately distinguish PC from non-PC using both the text and the URLs of news articles across five countries (France, Germany, Spain, the UK, and the US) and their different languages. Using cutting-edge models, we benchmark their performance against human-coded data to assess whether URL-level analysis can approximate full-text analysis. Our findings show that URLs embed relevant information and can serve as a scalable, cost-effective alternative for discerning PC. However, we also uncover systematic biases: LLMs tend to overclassify centrist news as political, producing false positives that may distort downstream analyses. We conclude by outlining methodological recommendations for the use of LLMs in political science research.
Nov-5-2025
- Country:
  - Europe
    - Finland > Uusimaa > Helsinki (0.04)
    - France (0.25)
    - Germany (0.25)
    - Spain > Catalonia > Barcelona Province > Barcelona (0.05)
    - United Kingdom (0.04)
  - North America > United States (0.14)
- Genre:
- Research Report > New Finding (1.00)
- Industry:
- Government (0.94)
- Media > News (0.68)
- Technology: