From shrimp Jesus to erotic tractors: how viral AI slop took over the internet

The Guardian

Clockwise from top left: Shrimp Jesus, Nayib Bukele, Justin Bieber and Super Cat League. In the algorithm-driven economy of 2025, one man's shrimp Jesus is another man's side hustle. AI slop - the low-quality, surreal content flooding social media platforms, designed to farm views - is a phenomenon, some would say the phenomenon, of the 2024 and 2025 internet. Merriam-Webster's word of the year is "slop", referring exclusively to the internet variety.



Japan accelerates self-driving truck tests

The Japan Times

A driver takes his hands off the wheel of an Isuzu self-driving truck during a test run in Mukawa, Hokkaido, on Nov. 18. Facing a serious shortage of drivers in the logistics industry, Japan's government and commercial vehicle-makers are accelerating experiments aimed at putting self-driving trucks into practical use. They aim to attain Level 4 autonomous driving, in which vehicles operate without human intervention under certain conditions. In addition to carrying out the trials, they will also work to build public understanding of self-driving trucks and ease concerns. "Autonomous driving for large vehicles carries a risk of being rejected by the public after a single accident," a senior official of a commercial vehicle-maker said.


Community-Aligned Behavior Under Uncertainty: Evidence of Epistemic Stance Transfer in LLMs

Gerard, Patrick, Chang, Aiden, Volkova, Svitlana

arXiv.org Artificial Intelligence

When large language models (LLMs) are aligned to a specific online community, do they exhibit generalizable behavioral patterns that mirror that community's attitudes and responses to new uncertainty, or are they simply recalling patterns from training data? We introduce a framework to test epistemic stance transfer: targeted deletion of event knowledge, validated with multiple probes, followed by evaluation of whether models still reproduce the community's organic response patterns under ignorance. Using Russian--Ukrainian military discourse and U.S. partisan Twitter data, we find that even after aggressive fact removal, aligned LLMs maintain stable, community-specific behavioral patterns for handling uncertainty. These results provide evidence that alignment encodes structured, generalizable behaviors beyond surface mimicry. Our framework offers a systematic way to detect behavioral biases that persist under ignorance, advancing efforts toward safer and more transparent LLM deployments.


Cross-Lingual Stability and Bias in Instruction-Tuned Language Models for Humanitarian NLP

Nemkova, Poli, Adhikari, Amrit, Pearson, Matthew, Sadu, Vamsi Krishna, Albert, Mark V.

arXiv.org Artificial Intelligence

Humanitarian organizations face a critical choice: invest in costly commercial APIs or rely on free open-weight models for multilingual human rights monitoring. While commercial systems offer reliability, open-weight alternatives lack empirical validation -- especially for low-resource languages common in conflict zones. This paper presents the first systematic comparison of commercial and open-weight large language models (LLMs) for human-rights-violation detection across seven languages, quantifying the cost-reliability trade-off facing resource-constrained organizations. Across 78,000 multilingual inferences, we evaluate six models -- four instruction-aligned (Claude-Sonnet-4, DeepSeek-V3, Gemini-Flash-2.0, GPT-4.1-mini) and two open-weight (LLaMA-3-8B, Mistral-7B) -- using both standard classification metrics and new measures of cross-lingual reliability: Calibration Deviation (CD), Decision Bias (B), Language Robustness Score (LRS), and Language Stability Score (LSS). Results show that alignment, not scale, determines stability: aligned models maintain near-invariant accuracy and balanced calibration across typologically distant and low-resource languages (e.g., Lingala, Burmese), while open-weight models exhibit significant prompt-language sensitivity and calibration drift. These findings demonstrate that multilingual alignment enables language-agnostic reasoning and provide practical guidance for humanitarian organizations balancing budget constraints with reliability in multilingual deployment.
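The paper's reliability measures (CD, B, LRS, LSS) are defined in the full text; as a rough illustration of the underlying idea, a stability summary could reward accuracy that is both high and uniform across prompt languages. The formula and numbers below are hypothetical, not the authors' definitions.

```python
# Hypothetical sketch: summarize cross-lingual stability as mean
# per-language accuracy penalized by its spread. This is NOT the paper's
# LSS definition, only an illustration of the concept it measures.
from statistics import mean, pstdev

def stability_score(per_language_accuracy):
    """Higher when accuracy is both high and uniform across languages."""
    accs = list(per_language_accuracy.values())
    return mean(accs) - pstdev(accs)

# Invented example accuracies for an aligned vs. an open-weight model.
aligned = {"en": 0.91, "lin": 0.89, "my": 0.90}   # near-invariant accuracy
open_wt = {"en": 0.88, "lin": 0.55, "my": 0.62}   # large cross-lingual drift
print(round(stability_score(aligned), 3))
print(round(stability_score(open_wt), 3))
```

Under this toy measure, the aligned model's flat accuracy profile yields a clearly higher score than the open-weight model's drifting one, mirroring the abstract's claim that alignment, not scale, determines stability.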


Assessing Web Search Credibility and Response Groundedness in Chat Assistants

Vykopal, Ivan, Pikuliak, Matúš, Ostermann, Simon, Šimko, Marián

arXiv.org Artificial Intelligence

Chat assistants increasingly integrate web search functionality, enabling them to retrieve and cite external sources. While this promises more reliable answers, it also raises the risk of amplifying misinformation from low-credibility sources. In this paper, we introduce a novel methodology for evaluating assistants' web search behavior, focusing on source credibility and the groundedness of responses with respect to cited sources. Using 100 claims across five misinformation-prone topics, we assess GPT-4o, GPT-5, Perplexity, and Qwen Chat. Our findings reveal differences between the assistants, with Perplexity achieving the highest source credibility, whereas GPT-4o exhibits elevated citation of non-credible sources on sensitive topics. This work provides the first systematic comparison of commonly used chat assistants for fact-checking behavior, offering a foundation for evaluating AI systems in high-stakes information environments.
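The credibility side of such an evaluation can be pictured with a minimal sketch: map each cited domain to a credibility rating and average over a response's citations. The ratings table and `default` fallback below are invented for illustration and are not the paper's actual methodology, which would rely on vetted credibility assessments.

```python
# Hypothetical sketch: score the source credibility of one response by
# looking up each cited domain in a (made-up) credibility table.
from urllib.parse import urlparse

# Illustrative ratings only -- a real study would use professionally
# vetted source-credibility assessments, not this toy table.
DOMAIN_CREDIBILITY = {
    "who.int": 1.0,
    "reuters.com": 0.9,
    "example-blog.net": 0.2,
}

def credibility_score(cited_urls, default=0.5):
    """Mean credibility of the domains cited in one response."""
    if not cited_urls:
        return None  # no citations -> credibility undefined
    scores = []
    for url in cited_urls:
        domain = urlparse(url).netloc.removeprefix("www.")
        scores.append(DOMAIN_CREDIBILITY.get(domain, default))
    return sum(scores) / len(scores)

citations = ["https://www.who.int/news/item/1", "https://example-blog.net/post"]
print(round(credibility_score(citations), 2))  # mean of 1.0 and 0.2 -> 0.6
```

Groundedness would require a second, separate check, e.g. verifying that each claim in the response is actually supported by the text of the pages it cites.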


Propaganda and Information Dissemination in the Russo-Ukrainian War: Natural Language Processing of Russian and Western Twitter Narratives

Gouliev, Zaur

arXiv.org Artificial Intelligence

The conflict in Ukraine has been characterised not only by military engagement but also by a significant information war, with social media platforms such as X (formerly Twitter) playing an important role in shaping public perception. This article analyses tweets from propaganda accounts and trusted accounts collected from the onset of the war in February 2022 until mid-May 2022 (n = 40,000 tweets in total). We utilise natural language processing and machine learning algorithms to assess sentiment and identify key themes, topics and narratives across the dataset, with human-in-the-loop (HITL) analysis throughout. Our findings indicate distinct strategies in how information is created, spread and targeted at different audiences by both sides. Propaganda accounts frequently employ emotionally charged language and disinformation to evoke fear and distrust, whereas other accounts, primarily Western, tend to focus on factual reporting and the humanitarian aspects of the conflict. Clustering analysis reveals groups of accounts with similar behaviours, which we suspect indicates the presence of coordinated efforts. This research contributes to our understanding of the dynamics of information warfare and offers techniques for future studies on social media influence in military conflicts.


Ukrainians are looking past NATO to a European security architecture

Al Jazeera

Cambridge, United Kingdom – The fate of Ukraine and the future of European security hang in the balance as United States and Russian diplomats prepare to discuss an accelerated peace plan this week. The uncertainty and dreadful possibilities of this historical moment, with Russia occupying a fifth of Ukrainian soil, dominated the atmosphere of Firewalling the Future, a conference on the future of Ukraine held at Cambridge University on Monday. Organised by programme leader Victoria Vdovychenko and professor of Ukrainian studies Rory Finnin under the auspices of the Centre for Geopolitics, it brought together Ukrainian, European and British diplomats, soldiers and academics. Dominant among the Ukrainians and Eastern Europeans present was the sentiment that with Trump's re-election, the international order is irrecoverably lost and needs to be rebuilt. Some spoke openly of a post-NATO reality in which Europe must form new structures and alliances to fend for itself.


BRIGHT: A globally distributed multimodal building damage assessment dataset with very-high-resolution for all-weather disaster response

Chen, Hongruixuan, Song, Jian, Dietrich, Olivier, Broni-Bediako, Clifford, Xuan, Weihao, Wang, Junjue, Shao, Xinlei, Wei, Yimin, Xia, Junshi, Lan, Cuiling, Schindler, Konrad, Yokoya, Naoto

arXiv.org Artificial Intelligence

Disaster events occur around the world and cause significant damage to human life and property. Earth observation (EO) data enables rapid and comprehensive building damage assessment (BDA), an essential capability in the aftermath of a disaster to reduce human casualties and to inform disaster relief efforts. Recent research focuses on the development of AI models to achieve accurate mapping of unseen disaster events, mostly using optical EO data. However, solutions based on optical data are limited to clear skies and daylight hours, preventing a prompt response to disasters. Integrating multimodal (MM) EO data, particularly the combination of optical and SAR imagery, makes it possible to provide all-weather, day-and-night disaster responses. Despite this potential, the development of robust multimodal AI models has been constrained by the lack of suitable benchmark datasets. In this paper, we present a BDA dataset using veRy-hIGH-resoluTion optical and SAR imagery (BRIGHT) to support AI-based all-weather disaster response. To the best of our knowledge, BRIGHT is the first open-access, globally distributed, event-diverse MM dataset specifically curated to support AI-based disaster response. It covers five types of natural disasters and two types of man-made disasters across 12 regions worldwide, with a particular focus on developing countries where external assistance is most needed. The optical and SAR imagery in BRIGHT, with a spatial resolution of 0.3 to 1 meter, provides detailed representations of individual buildings, making it ideal for precise BDA. In our experiments, we have tested seven advanced AI models trained on BRIGHT to validate their transferability and robustness. The dataset and code are available at https://github.com/ChenHongruixuan/BRIGHT. BRIGHT also serves as the official dataset for the 2025 IEEE GRSS Data Fusion Contest.


Quantifying Extreme Opinions on Reddit Amidst the 2023 Israeli-Palestinian Conflict

Guerra, Alessio, Lepre, Marcello, Karakus, Oktay

arXiv.org Artificial Intelligence

This study investigates the dynamics of extreme opinions on social media during the 2023 Israeli-Palestinian conflict, utilising a comprehensive dataset of over 450,000 posts from four Reddit subreddits (r/Palestine, r/Judaism, r/IsraelPalestine, and r/worldnews). A lexicon-based, unsupervised methodology was developed to measure "extreme opinions" by considering factors such as anger, polarity, and subjectivity. The analysis identifies significant peaks in extremism scores that correspond to pivotal real-life events, such as the IDF's bombings of Al Quds Hospital and the Jabalia Refugee Camp, and the end of a ceasefire following a terrorist attack. Additionally, this study explores the distribution and correlation of these scores across different subreddits and over time, providing insights into the propagation of polarised sentiments in response to conflict events. By examining the quantitative effects of each score on extremism and analysing word cloud similarities through Jaccard indices, the research offers a nuanced understanding of the factors driving extreme online opinions. This approach underscores the potential of social media analytics in capturing the complex interplay between real-world events and online discourse, while also highlighting the limitations and challenges of measuring extremism in social media contexts.
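A lexicon-based score of the kind the abstract describes, together with the Jaccard index used to compare word clouds, might be sketched as follows. The lexicon, weights, and combination rule here are invented for illustration and are not the authors' actual formula.

```python
# Hypothetical sketch of a lexicon-based "extremism" score combining
# anger, polarity, and subjectivity signals, plus the Jaccard index for
# comparing word clouds. Lexicon and weights are invented.
ANGER_LEXICON = {"outrage", "furious", "attack", "destroy"}

def extremism_score(tokens, polarity, subjectivity,
                    w_anger=0.5, w_pol=0.3, w_subj=0.2):
    """Weighted combination of anger-word density, |polarity|, subjectivity."""
    if not tokens:
        return 0.0
    anger = sum(t in ANGER_LEXICON for t in tokens) / len(tokens)
    return w_anger * anger + w_pol * abs(polarity) + w_subj * subjectivity

def jaccard(words_a, words_b):
    """Jaccard index between two word sets (e.g. subreddit word clouds)."""
    a, b = set(words_a), set(words_b)
    return len(a & b) / len(a | b) if a | b else 0.0

tokens = "the attack sparked outrage online".split()
print(round(extremism_score(tokens, polarity=-0.8, subjectivity=0.6), 3))
print(jaccard({"war", "peace"}, {"war", "talks"}))
```

In a pipeline like the one described, such a score would be computed per post, then aggregated over time to surface the peaks that align with real-world conflict events.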