AI's Hacking Skills Are Approaching an 'Inflection Point'

WIRED

AI models are getting so good at finding vulnerabilities that some experts say the tech industry might need to rethink how software is built. Vlad Ionescu and Ariel Herbert-Voss, cofounders of the cybersecurity startup RunSybil, were momentarily confused when their AI tool, Sybil, alerted them to a weakness in a customer's systems last November. Sybil uses a mix of different AI models, as well as a few proprietary technical tricks, to scan computer systems for issues that hackers might exploit, like an unpatched server or a misconfigured database. In this case, Sybil flagged a problem with the customer's deployment of federated GraphQL, a language used to specify how data is accessed over the web through application programming interfaces (APIs). The issue meant that the customer was inadvertently exposing confidential information.
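
The article does not describe Sybil's internals, but the class of issue it flagged, an over-permissive GraphQL deployment, can be illustrated with a common first probe: checking whether an endpoint answers introspection queries that reveal its full schema. A minimal sketch, with a hypothetical endpoint URL; this is an illustration of the general check, not RunSybil's method:

```python
import requests

# Standard GraphQL introspection query: if the server answers it, the
# full schema (type and field names) is publicly visible.
INTROSPECTION_QUERY = {"query": "{ __schema { types { name fields { name } } } }"}

def introspection_exposed(endpoint: str) -> bool:
    """Return True if the GraphQL endpoint discloses its schema."""
    resp = requests.post(endpoint, json=INTROSPECTION_QUERY, timeout=10)
    if resp.status_code != 200:
        return False
    data = resp.json().get("data") or {}
    return data.get("__schema") is not None

# Hypothetical endpoint; only probe systems you are authorized to test.
print(introspection_exposed("https://shop.example.com/graphql"))
```

A real scanner goes far beyond this, analyzing the disclosed schema for fields that leak confidential data; the point here is only the shape of the probe.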


The US Invaded Venezuela and Captured Nicolás Maduro. ChatGPT Disagrees

WIRED

Some AI chatbots have a surprisingly good handle on breaking news.

[Photo caption: Supporters of Nicolás Maduro and the late Hugo Chávez hold posters with their images after explosions and low-flying aircraft were heard on January 3, 2026, in Caracas, Venezuela.]

At around 2 am local time in Caracas, Venezuela, US helicopters flew overhead while explosions resounded below. A few hours later, US president Donald Trump posted on his Truth Social platform that Venezuelan president Nicolás Maduro and his wife had been "captured and flown out of the Country." US attorney general Pam Bondi followed with a post on X that Maduro and his wife had been indicted in the Southern District of New York and would "soon face the full wrath of American justice on American soil in American courts."


Institutional AI Sovereignty Through Gateway Architecture: Implementation Report from Fontys ICT

Huijts, Ruud, Suilen, Koen

arXiv.org Artificial Intelligence

To counter fragmented, high-risk adoption of commercial AI tools, we built and ran an institutional AI platform in a six-month, 300-user pilot, showing that a university of applied sciences can offer advanced AI with fair access, transparent risks, controlled costs, and alignment with European law. Commercial AI subscriptions create unequal access and compliance risks through opaque processing and non-EU hosting, yet banning them is neither realistic nor useful. Institutions need a way to provide powerful AI in a sovereign, accountable form. Our solution is a governed gateway platform with three layers: a ChatGPT-style frontend linked to institutional identity that makes model choice explicit; a gateway core enforcing policy, controlling access and budgets, and routing traffic to EU infrastructure by default; and a provider layer wrapping commercial and open-source models in institutional model cards that consolidate vendor documentation into one governance interface. The pilot ran reliably with no privacy incidents and strong adoption, enabling EU-default routing, managed spending, and transparent model choices. Only the gateway pattern combines model diversity and rapid innovation with institutional control. The central insight: AI is not a support function but strategy, demanding dedicated leadership. Sustainable operation requires governance beyond traditional boundaries. We recommend establishing a formal AI Officer role combining technical literacy, governance authority, and educational responsibility. Without it, AI decisions stay ad-hoc and institutional exposure grows. With it, higher-education institutions can realistically operate their own multi-provider AI platform, provided they govern AI as seriously as they teach it.
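
The abstract describes the gateway core only at the architecture level. The sketch below illustrates the pattern it names (explicit model choice, budget control, EU-default routing) using entirely hypothetical model names, prices, and class structure; it is not Fontys' implementation:

```python
from dataclasses import dataclass, field

# Hypothetical registry standing in for the paper's institutional model
# cards, which consolidate vendor documentation into one governance record.
MODEL_CARDS = {
    "eu-commercial-model": {"region": "eu", "cost_per_1k": 0.005},
    "eu-open-model": {"region": "eu", "cost_per_1k": 0.003},
    "us-frontier-model": {"region": "us", "cost_per_1k": 0.010},
}

@dataclass
class GatewayCore:
    """Minimal policy-enforcing router: EU by default, per-user budgets."""
    budgets: dict = field(default_factory=dict)  # user -> remaining EUR

    def route(self, user: str, model: str, est_tokens: int,
              allow_non_eu: bool = False) -> str:
        card = MODEL_CARDS.get(model)
        if card is None:
            raise ValueError(f"unknown model: {model}")
        if card["region"] != "eu" and not allow_non_eu:
            # EU infrastructure is the default; anything else is an opt-in.
            raise PermissionError(f"{model} is hosted outside the EU")
        cost = card["cost_per_1k"] * est_tokens / 1000
        if self.budgets.get(user, 0.0) < cost:
            raise PermissionError(f"budget exhausted for {user}")
        self.budgets[user] -= cost
        return f"routed {user} -> {model} (est. {cost:.4f} EUR)"

gateway = GatewayCore(budgets={"student42": 5.00})
print(gateway.route("student42", "eu-open-model", est_tokens=2000))
```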


WebMall -- A Multi-Shop Benchmark for Evaluating Web Agents [Technical Report]

Peeters, Ralph, Steiner, Aaron, Schwarz, Luca, Caspary, Julian Yuya, Bizer, Christian

arXiv.org Artificial Intelligence

LLM-based web agents have the potential to automate long-running web tasks, such as searching for products in multiple e-shops and subsequently ordering the cheapest products that meet the user's needs. Benchmarks for evaluating web agents either require agents to perform tasks online using the live Web or offline using simulated environments, which allow for the exact reproduction of the experimental setup. While DeepShop provides an online benchmark that requires agents to perform challenging shopping tasks, existing offline benchmarks such as WebShop, WebArena, or Mind2Web cover only comparatively simple e-commerce tasks that need to be performed against a single shop containing product data from a single source. What is missing is an e-commerce benchmark that simulates multiple shops containing heterogeneous product data and requires agents to perform complex tasks. We fill this gap by introducing WebMall, the first offline multi-shop benchmark for evaluating web agents on challenging comparison shopping tasks. WebMall consists of four simulated shops populated with product data extracted from the Common Crawl. The WebMall tasks range from specific product searches and price comparisons to advanced queries for complementary or substitute products, as well as checkout processes. We validate WebMall using eight agents that differ in observation space, availability of short-term memory, and the employed LLM. The validation highlights the difficulty of the benchmark, with even the best-performing agents achieving task completion rates below 55% in the task categories of cheapest product search and vague product search.
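
WebMall's headline metric is the task completion rate. A minimal sketch of how such a harness might score agent runs, assuming a hypothetical record format in which each run carries the agent's answer set and a gold answer set:

```python
# Hypothetical run format: a task counts as completed only when the
# agent's answers (e.g. product page URLs) exactly match the gold set.
def completion_rate(runs: list[dict]) -> float:
    completed = sum(1 for r in runs if set(r["answer"]) == set(r["gold"]))
    return completed / len(runs)

runs = [
    {"task": "cheapest_product_search", "answer": ["shop2/item17"], "gold": ["shop2/item17"]},
    {"task": "vague_product_search", "answer": ["shop1/item03"], "gold": ["shop4/item99"]},
]
print(f"Task completion rate: {completion_rate(runs):.0%}")  # 50%
```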


Are Large Vision Language Models Truly Grounded in Medical Images? Evidence from Italian Clinical Visual Question Answering

Felizzi, Federico, Riccomi, Olivia, Ferramola, Michele, Causio, Francesco Andrea, Del Medico, Manuel, De Vita, Vittorio, De Mori, Lorenzo, Piscitelli, Alessandra, Risuleo, Pietro Eric, Castaniti, Bianca Destro, Cristiano, Antonio, Longo, Alessia, De Angelis, Luigi, Vassalli, Mariapia, Di Pumpo, Marcello

arXiv.org Artificial Intelligence

Large vision language models (VLMs) have achieved impressive performance on medical visual question answering benchmarks, yet their reliance on visual information remains unclear. We investigate whether frontier VLMs demonstrate genuine visual grounding when answering Italian medical questions by testing four state-of-the-art models: Claude Sonnet 4.5, GPT-4o, GPT-5-mini, and Gemini 2.0 Flash Exp. Using 60 questions from the EuropeMedQA Italian dataset that explicitly require image interpretation, we substitute correct medical images with blank placeholders to test whether models truly integrate visual and textual information. Our results reveal striking variability in visual dependency: GPT-4o shows the strongest visual grounding with a 27.9pp accuracy drop (83.2% [74.6%, 91.7%] to 55.3% [44.1%, 66.6%]), while GPT-5-mini, Gemini, and Claude maintain high accuracy with modest drops of 8.5pp, 2.4pp, and 5.6pp respectively. Analysis of model-generated reasoning reveals confident explanations for fabricated visual interpretations across all models, suggesting varying degrees of reliance on textual shortcuts versus genuine visual analysis. These findings highlight critical differences in model robustness and the need for rigorous evaluation before clinical deployment.
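
The paper's core manipulation, swapping each clinical image for a blank placeholder and re-measuring accuracy, is straightforward to sketch. The ask_model client and the dataset row format below are hypothetical stand-ins for whatever API the authors used:

```python
from PIL import Image

def blank_like(img: Image.Image) -> Image.Image:
    """White placeholder with the original image's dimensions."""
    return Image.new("RGB", img.size, color="white")

def visual_dependency_pp(dataset, ask_model) -> float:
    """Accuracy drop in percentage points when images are blanked."""
    with_img = [ask_model(q["text"], q["image"]) == q["answer"] for q in dataset]
    blanked = [ask_model(q["text"], blank_like(q["image"])) == q["answer"]
               for q in dataset]
    acc = lambda hits: 100.0 * sum(hits) / len(hits)
    return acc(with_img) - acc(blanked)  # e.g. 27.9 for GPT-4o in the paper
```

A model that keeps answering correctly with a blank image is, by construction, leaning on textual shortcuts rather than the image.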


FLAWS: A Benchmark for Error Identification and Localization in Scientific Papers

Xi, Sarina, Rao, Vishisht, Payan, Justin, Shah, Nihar B.

arXiv.org Artificial Intelligence

The identification and localization of errors is a core task in peer review, yet the exponential growth of scientific output has made it increasingly difficult for human reviewers to reliably detect errors given the limited pool of experts. Recent advances in Large Language Models (LLMs) have sparked interest in their potential to support such evaluation tasks, from academic peer review to automated scientific assessment. However, despite the growing use of LLMs in review systems, their capabilities to pinpoint errors remain underexplored. In this work, we introduce Fault Localization Across Writing in Science (FLAWS), an automated benchmark consisting of 713 paper-error pairs designed to evaluate how effectively LLMs detect errors that undermine key claims in research papers. We construct the benchmark by systematically inserting claim-invalidating errors into peer-reviewed papers using LLMs, paired with an automated evaluation metric that measures whether models can identify and localize these errors. Developing such a benchmark presents unique challenges that we overcome: ensuring that the inserted errors are well-defined, challenging, and relevant to the content of the paper, avoiding artifacts that would make identification trivial, and designing a scalable, automated evaluation metric. On the resulting benchmark, we evaluate five frontier LLMs: Claude Sonnet 4.5, DeepSeek Reasoner v3.1, Gemini 2.5 Pro, GPT 5, and Grok 4. Among these, GPT 5 is the top-performing model, achieving 39.1% identification accuracy when k=10, where k is the number of top-ranked error text candidates generated by the LLM.
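
The abstract reports top-k scoring at k=10 but not the exact matching rule. A minimal sketch of one plausible identification-accuracy metric, where a paper counts as identified if any of the model's top-k candidate spans overlaps the gold error span (the substring-overlap test here is an assumption, not the paper's metric):

```python
def identification_accuracy(predictions: list[list[str]],
                            gold_spans: list[str], k: int = 10) -> float:
    """Fraction of papers where a top-k candidate overlaps the gold span."""
    hits = sum(
        1 for cands, gold in zip(predictions, gold_spans)
        if any(gold in c or c in gold for c in cands[:k])
    )
    return hits / len(gold_spans)

preds = [["the variance term is doubled", "Table 2 row totals"],
         ["sample size is 30, not 300"]]
gold = ["the variance term is doubled", "learning rate of 0.1"]
print(identification_accuracy(preds, gold, k=10))  # 0.5
```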


Simulated Self-Assessment in Large Language Models: A Psychometric Approach to AI Self-Efficacy

Jackson, Daniel I, Jensen, Emma L, Hussain, Syed-Amad, Sezgin, Emre

arXiv.org Artificial Intelligence

Self-assessment is a key aspect of reliable intelligence, yet evaluations of large language models (LLMs) focus mainly on task accuracy. We adapted the 10-item General Self-Efficacy Scale (GSES) to elicit simulated self-assessments from ten LLMs across four conditions: no task, computational reasoning, social reasoning, and summarization. GSES responses were highly stable across repeated administrations and randomized item orders. However, models showed significantly different self-efficacy levels across conditions, with aggregate scores lower than human norms. All models achieved perfect accuracy on computational and social questions, whereas summarization performance varied widely. Self-assessment did not reliably reflect ability: several low-scoring models performed accurately, while some high-scoring models produced weaker summaries. Follow-up confidence prompts yielded modest, mostly downward revisions, suggesting mild overestimation in first-pass assessments. Qualitative analysis showed that higher self-efficacy corresponded to more assertive, anthropomorphic reasoning styles, whereas lower scores reflected cautious, de-anthropomorphized explanations. Psychometric prompting provides structured insight into LLM communication behavior but not calibrated performance estimates.
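
The protocol, repeated administrations of the 10-item GSES under randomized item orders, can be sketched as follows. The item wordings are placeholders and ask_model is a hypothetical client that returns a model's rating for one item; the GSES itself is scored 1 to 4 per item, so totals range from 10 to 40:

```python
import random
import statistics

GSES_ITEMS = [f"GSES item {i}" for i in range(1, 11)]  # placeholder wordings

def administer_gses(ask_model, runs: int = 5, seed: int = 0) -> float:
    """Mean total score over repeated, order-randomized administrations."""
    rng = random.Random(seed)
    totals = []
    for _ in range(runs):
        order = GSES_ITEMS[:]
        rng.shuffle(order)  # randomized item order, as in the paper
        totals.append(sum(int(ask_model(item)) for item in order))
    return statistics.mean(totals)
```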


FlagEval Findings Report: A Preliminary Evaluation of Large Reasoning Models on Automatically Verifiable Textual and Visual Questions

Qin, Bowen, Yue, Chen, Yin, Fang, Wang, Hui, Yao, JG, Liu, Jiakang, Zheng, Jing-Shu, Chen, Miguel Hu, Xuan, Richeng, Meng, Shibei, Zhou, Shiqi, Dai, Teng, Ren, Tong-Shuai, Cui, Wei, Yang, Xi, Du, Xialin, Xu, Xiaojing, Sun, Xue, Li, Xuejing, Liu, Yaming, Liu, Yesheng, Liu, Ying, Lin, Yonghua, Zhao, Yu, Zhang, Yunduo, Luo, Yuwen, He, Zheqi, He, Zhiyuan, Wang, Zhongyuan

arXiv.org Artificial Intelligence

We conduct a moderate-scale, largely contamination-free evaluation of current large reasoning models (LRMs) and report some preliminary findings. We also release ROME, our evaluation benchmark for vision language models, intended to test reasoning from visual clues. Links to the benchmark, evaluation data, and further updates are available at: https://flageval-baai.github.io/LRM-Eval/


AI Debaters are More Persuasive when Arguing in Alignment with Their Own Beliefs

Carro, María Victoria, Mester, Denise Alejandra, Nieto, Facundo, Stanchi, Oscar Agustín, Bergman, Guido Ernesto, Leiva, Mario Alejandro, Sprejer, Eitan, Gangi, Luca Nicolás Forziati, Selasco, Francisca Gauna, Corvalán, Juan Gustavo, Simari, Gerardo I., Martinez, María Vanina

arXiv.org Artificial Intelligence

The core premise of AI debate as a scalable oversight technique is that it is harder to lie convincingly than to refute a lie, enabling the judge to identify the correct position. Yet, existing debate experiments have relied on datasets with ground truth, where lying is reduced to defending an incorrect proposition. This overlooks a subjective dimension: lying also requires the belief that the claim defended is false. In this work, we apply debate to subjective questions and explicitly measure large language models' prior beliefs before experiments. Debaters were asked to select their preferred position, then presented with a judge persona deliberately designed to conflict with their identified priors. This setup tested whether models would adopt sycophantic strategies, aligning with the judge's presumed perspective to maximize persuasiveness, or remain faithful to their prior beliefs. We implemented and compared two debate protocols, sequential and simultaneous, to evaluate potential systematic biases. Finally, we assessed whether models were more persuasive and produced higher-quality arguments when defending positions consistent with their prior beliefs versus when arguing against them. Our main findings show that models tend to prefer defending stances aligned with the judge persona rather than their prior beliefs, sequential debate introduces significant bias favoring the second debater, models are more persuasive when defending positions aligned with their prior beliefs, and paradoxically, arguments misaligned with prior beliefs are rated as higher quality in pairwise comparison. These results can inform human judges to provide higher-quality training signals and contribute to more aligned AI systems, while revealing important aspects of human-AI interaction regarding persuasion dynamics in language models.
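
The two protocols the paper compares differ only in whether the second debater sees the first argument. A minimal sketch with a hypothetical ask() client; the prompts are illustrative, not the paper's:

```python
# ask(prompt) is a hypothetical LLM client returning the model's text.
def sequential_debate(ask, question, judge_persona):
    a1 = ask(f"Argue your preferred position on: {question}")
    # The second debater sees a1: the source of the second-debater bias.
    a2 = ask(f"Argue the opposing position on: {question}\nOpponent argued: {a1}")
    return ask(f"As {judge_persona}, pick the stronger argument.\nA: {a1}\nB: {a2}")

def simultaneous_debate(ask, question, judge_persona):
    a1 = ask(f"Argue your preferred position on: {question}")
    a2 = ask(f"Argue the opposing position on: {question}")  # blind to a1
    return ask(f"As {judge_persona}, pick the stronger argument.\nA: {a1}\nB: {a2}")
```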