Can LLMs Evaluate What They Cannot Annotate? Revisiting LLM Reliability in Hate Speech Detection

Piot, Paloma, Otero, David, Martín-Rodilla, Patricia, Parapar, Javier

arXiv.org Artificial Intelligence

Hate speech spreads widely online, harming individuals and communities; automatic detection is therefore essential for large-scale moderation, yet it remains difficult. Part of the challenge lies in subjectivity: what one person flags as hate speech, another may see as benign. Traditional annotation agreement metrics, such as Cohen's $\kappa$, oversimplify this disagreement, treating it as error rather than meaningful diversity. Meanwhile, Large Language Models (LLMs) promise scalable annotation, but prior studies demonstrate that they cannot fully replace human judgement, especially in subjective tasks. In this work, we reexamine LLM reliability using a subjectivity-aware framework, cross-Rater Reliability (xRR), revealing that even under this fairer lens, LLMs still diverge from humans. Yet this limitation opens an opportunity: we find that LLM-generated annotations can reliably reflect performance trends across classification models, correlating with human evaluations. We test this by examining whether LLM-generated annotations preserve the relative ordering of model performance derived from human evaluation (i.e., whether models ranked as more reliable by human annotators keep the same order when evaluated with LLM-generated labels). Our results show that, although LLMs differ from humans at the instance level, they reproduce similar ranking and classification patterns, suggesting their potential as proxy evaluators. While not a substitute for human annotators, they might serve as a scalable proxy for evaluation in subjective NLP tasks.
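The ranking-preservation test described above can be sketched as a Spearman rank correlation between per-model scores computed under human labels and under LLM labels. The scores below are made-up illustrative values, not the paper's results:

```python
from statistics import mean

def rank(xs):
    """Assign 1-based ranks, averaging ranks over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman correlation = Pearson correlation of the rank vectors."""
    ra, rb = rank(a), rank(b)
    ma, mb = mean(ra), mean(rb)
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)

# Hypothetical F1 scores for five classifiers, evaluated once against
# human annotations and once against LLM-generated annotations.
human_f1 = [0.81, 0.74, 0.69, 0.66, 0.58]
llm_f1   = [0.77, 0.72, 0.70, 0.61, 0.55]
print(spearman(human_f1, llm_f1))  # 1.0: the model ordering is preserved
```

A correlation near 1 means the LLM labels rank the models the same way the human labels do, even when the two disagree on individual instances.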


Artificial Intelligence Applications in Horizon Scanning for Infectious Diseases

Miles, Ian, Wakimoto, Mayumi, Meira, Wagner Jr., Paula, Daniela, Ticiane, Daylene, Rosa, Bruno, Biddulph, Jane, Georgiou, Stelios, Ermida, Valdir

arXiv.org Artificial Intelligence

This review explores the integration of Artificial Intelligence into Horizon Scanning, focusing on identifying and responding to emerging threats and opportunities linked to Infectious Diseases. We examine how AI tools can enhance signal detection, data monitoring, scenario analysis, and decision support. We also address the risks associated with AI adoption and propose strategies for effective implementation and governance. The findings contribute to the growing body of Foresight literature by demonstrating the potential and limitations of AI in Public Health preparedness.


Scaling Generative Verifiers For Natural Language Mathematical Proof Verification And Selection

Mahdavi, Sadegh, Kisacanin, Branislav, Toshniwal, Shubham, Du, Wei, Moshkov, Ivan, Armstrong, George, Liao, Renjie, Thrampoulidis, Christos, Gitman, Igor

arXiv.org Artificial Intelligence

Large language models have achieved remarkable success on final-answer mathematical problems, largely due to the ease of applying reinforcement learning with verifiable rewards. However, the reasoning underlying these solutions is often flawed. Advancing to rigorous proof-based mathematics requires reliable proof verification capabilities. We begin by analyzing multiple evaluation setups and show that focusing on a single benchmark can lead to brittle or misleading conclusions. To address this, we evaluate both proof-based and final-answer reasoning to obtain a more reliable measure of model performance. We then scale two major generative verification methods (GenSelect and LLM-as-a-Judge) to millions of tokens and identify their combination as the most effective framework for solution verification and selection. We further show that the choice of prompt for LLM-as-a-Judge significantly affects the model's performance, but reinforcement learning can reduce this sensitivity. However, despite improving proof-level metrics, reinforcement learning does not enhance final-answer precision, indicating that current models often reward stylistic or procedural correctness rather than mathematical validity. Our results establish practical guidelines for designing and evaluating scalable proof-verification and selection systems.
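The selection step can be illustrated with a minimal LLM-as-a-Judge sketch. The `judge` callable and its repeated-sampling scheme here are illustrative assumptions, not the paper's actual pipeline: given several candidate solutions, sample a verdict for each candidate a few times and keep the candidate with the highest approval rate.

```python
def select_solution(candidates, judge, n_samples=8):
    """Pick the candidate with the highest judge-approval rate.

    `judge(solution)` is assumed to return a True/False verdict per call
    (e.g. one LLM-as-a-Judge query); sampling it several times per
    candidate smooths out the judge's own noise.
    """
    scores = {}
    for sol in candidates:
        votes = [judge(sol) for _ in range(n_samples)]
        scores[sol] = sum(votes) / n_samples
    return max(candidates, key=scores.get)

# Toy deterministic "judge" purely for illustration.
best = select_solution(["proof A", "B"], judge=lambda s: s.startswith("proof"))
print(best)  # "proof A"
```

In practice the verdicts come from an LLM, so repeated sampling (and, as the abstract notes, careful prompt choice) matters far more than in this deterministic toy.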


Surjective Independence of Causal Influences for Local Bayesian Network Structures

Drury, Kieran, Barons, Martine J., Smith, Jim Q.

arXiv.org Artificial Intelligence

The very expressiveness of Bayesian networks can introduce fresh challenges due to the large number of relationships they often model. In many domains, it is therefore essential to supplement any available data with elicited expert judgements. This in turn leads to two key challenges: the cognitive burden of these judgements is often very high, and a very large number of judgements are required to obtain a full probability model. We can mitigate both issues by introducing assumptions such as independence of causal influences (ICI) on the local structures throughout the network, restricting the parameter space of the model. However, the assumption of ICI is often unjustified and overly strong. In this paper, we introduce the surjective independence of causal influences (SICI) model, which relaxes the ICI assumption and provides a more practical alternative local structure model that facilitates efficient Bayesian network parameterisation.
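The canonical instance of ICI is the noisy-OR gate: each parent cause independently succeeds or fails to produce the effect, so a node with $n$ binary parents needs only $n + 1$ parameters instead of a $2^n$-row conditional probability table. A minimal sketch (the parameter names are illustrative):

```python
from math import prod

def noisy_or(leak, link_probs, active):
    """P(effect = 1) under a noisy-OR gate.

    leak:       probability the effect occurs with no active cause
    link_probs: p_i = P(cause i alone produces the effect)
    active:     which parent causes are present (booleans)
    """
    # The effect is absent only if the leak fails AND every active
    # cause independently fails (each with probability 1 - p_i).
    fail = (1 - leak) * prod(1 - p for p, a in zip(link_probs, active) if a)
    return 1 - fail

# Three possible causes, of which only the first two are active.
p = noisy_or(leak=0.05, link_probs=[0.8, 0.6, 0.9], active=[True, True, False])
print(round(p, 4))  # 0.924
```

This is exactly the kind of restrictive local structure the abstract refers to: eliciting four numbers is far easier than eliciting eight table rows, but the independence-of-failures assumption baked into the product above is what SICI is designed to relax.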