misinformation
- North America > United States > Vermont (0.05)
- Europe > United Kingdom > England (0.04)
- Asia > Singapore (0.04)
- Asia > Japan > Honshū > Chūgoku > Hiroshima Prefecture > Hiroshima (0.04)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Banking & Finance (1.00)
- (3 more...)
- North America > United States > Texas (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- Asia > India > Maharashtra > Mumbai (0.04)
- Media > News (0.57)
- Information Technology > Services (0.54)
- North America > United States > New Mexico > Los Alamos County > Los Alamos (0.04)
- North America > Canada (0.04)
- North America > United States > California (0.14)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- North America > United States > New Mexico > Santa Fe County > Santa Fe (0.04)
- (2 more...)
- Health & Medicine > Therapeutic Area > Immunology (1.00)
- Health & Medicine > Therapeutic Area > Vaccines (0.95)
- Media (0.78)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (0.70)
Elon Musk's Grok AI generates images of 'minors in minimal clothing'
Grok has a history of failing to maintain its safety guardrails and posting misinformation. Elon Musk's chatbot Grok posted on Friday that lapses in safeguards had led it to generate "images depicting minors in minimal clothing" on social media platform X. The chatbot, a product of Musk's company xAI, has been generating a wave of sexualized images throughout the week in response to user prompts. Screenshots shared by users on X showed Grok's public media tab filled with such images.
- North America > United States (0.75)
- Europe > Ukraine (0.07)
- Oceania > Australia (0.05)
- Africa > South Africa (0.05)
- Leisure & Entertainment > Sports (0.75)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.75)
- Government > Regional Government > North America Government > United States Government (0.75)
- Media > News (0.73)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.48)
Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models
Vision-Language Models (VLMs) excel in generating textual responses from visual inputs, but their versatility raises security concerns. This study takes the first step in exposing VLMs' susceptibility to data poisoning attacks that can manipulate responses to innocuous, everyday prompts. We introduce Shadowcast, a stealthy data poisoning attack where poison samples are visually indistinguishable from benign images with matching texts. Shadowcast demonstrates effectiveness in two attack types. The first is a traditional Label Attack, tricking VLMs into misidentifying class labels, such as confusing Donald Trump for Joe Biden.
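The Label Attack described above rests on a familiar adversarial-example trick: nudge a benign image within a tight perturbation budget until its feature embedding lands near the target concept, then pair it with the target's caption. A minimal sketch, with a linear map standing in for the VLM's vision encoder (all names and hyperparameters here are illustrative, not Shadowcast's actual procedure):

```python
import numpy as np

def craft_poison(benign_img, target_feat, encoder, eps=0.03, steps=100, lr=0.005):
    """PGD-style search for a stealthy poison image: stay within an L_inf ball
    of the benign image (visual indistinguishability) while pushing its
    feature embedding toward a target concept's features. `encoder` is a
    linear stand-in for a VLM vision encoder; names are illustrative."""
    x = benign_img.copy()
    best_x = x.copy()
    best_loss = float(np.sum((encoder @ x - target_feat) ** 2))
    for _ in range(steps):
        # gradient of ||Ex - t||^2 with respect to x is 2 E^T (Ex - t)
        grad = 2.0 * encoder.T @ (encoder @ x - target_feat)
        x = x - lr * np.sign(grad)                          # signed-gradient step
        x = np.clip(x, benign_img - eps, benign_img + eps)  # stealth budget
        x = np.clip(x, 0.0, 1.0)                            # valid pixel range
        loss = float(np.sum((encoder @ x - target_feat) ** 2))
        if loss < best_loss:                                # keep best iterate
            best_x, best_loss = x.copy(), loss
    return best_x
```

In the attack setting the gradient would come from the frozen vision encoder, and the crafted image, still looking benign, enters the training set paired with the target class's text.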
Like in past disasters, misinformation spreads online in Aomori quake aftermath
A damaged concrete pillar supporting the Hachinohe Line in the city of Hachinohe, Aomori Prefecture, on Wednesday. False claims that a powerful earthquake in northern Japan was "human-caused," along with artificial intelligence-generated videos, are spreading rapidly across social media after the quake struck Aomori Prefecture on Monday evening. The earthquake registered an upper 6 on Japan's seismic intensity scale, prompting warnings from the Japan Meteorological Agency (JMA) and the Cabinet Secretariat against the spread of unverified information that could hamper emergency response efforts. Misinformation circulated widely on platforms including X, echoing a pattern seen during previous disasters such as the Noto Peninsula earthquake in January 2024, when false rescue pleas and conspiracy theories also gained traction online.
- Asia > Japan > Honshū > Tōhoku > Aomori Prefecture > Aomori (0.85)
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.07)
- Asia > China (0.07)
- (7 more...)
Simulating Misinformation Propagation in Social Networks using Large Language Models
Maurya, Raj Gaurav, Shukla, Vaibhav, Dandekar, Raj Abhijit, Dandekar, Rajat, Panat, Sreedath
Misinformation on social media thrives on surprise, emotion, and identity-driven reasoning, often amplified through human cognitive biases. To investigate these mechanisms, we model large language model (LLM) personas as synthetic agents that mimic user-level biases, ideological alignments, and trust heuristics. Within this setup, we introduce an auditor-node framework to simulate and analyze how misinformation evolves as it circulates through networks of such agents. News articles are propagated across networks of persona-conditioned LLM nodes, each rewriting received content. A question-answering-based auditor then measures factual fidelity at every step, offering interpretable, claim-level tracking of misinformation drift. We formalize a misinformation index and a misinformation propagation rate to quantify factual degradation across homogeneous and heterogeneous branches of up to 30 sequential rewrites. Experiments with 21 personas across 10 domains reveal that identity- and ideology-based personas act as misinformation accelerators, especially in politics, marketing, and technology. By contrast, expert-driven personas preserve factual stability. Controlled-random branch simulations further show that once early distortions emerge, heterogeneous persona interactions rapidly escalate misinformation to propaganda-level distortion. Our taxonomy of misinformation severity, spanning factual errors, lies, and propaganda, connects observed drift to established theories in misinformation studies. These findings demonstrate the dual role of LLMs as both proxies for human-like biases and as auditors capable of tracing information fidelity. The proposed framework provides an interpretable, empirically grounded approach for studying, simulating, and mitigating misinformation diffusion in digital ecosystems.
- Asia > Indonesia (0.04)
- Europe > Middle East (0.04)
- Asia > China (0.04)
- (9 more...)
- Media > News (1.00)
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
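The auditor-node loop in the simulation paper above can be caricatured without any LLM: persona nodes become biased rewriters that flip claims with some probability per hop, and the auditor's claim-level check reduces to counting surviving facts. A toy sketch; the persona bias values and this misinformation-index definition are my assumptions, not the paper's:

```python
import random

def propagate(claims, personas, rng):
    """Toy stand-in for the auditor-node framework: each persona 'rewrites'
    the article by distorting each still-true claim with probability equal to
    its bias, and the auditor scores the misinformation index (distorted /
    total claims) after every hop. Distortions are never undone."""
    state = dict(claims)                 # claim -> still factually intact?
    trace = []
    for bias in personas:                # one network hop per persona node
        for c in state:
            if state[c] and rng.random() < bias:
                state[c] = False         # this rewrite introduces a distortion
        m_index = sum(not v for v in state.values()) / len(state)
        trace.append(m_index)
    return trace
```

With accelerator-like biases the index climbs toward 1.0 within a few hops, while expert-like (low-bias) chains stay near 0, mirroring the paper's qualitative contrast between ideological and expert personas.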
Pooling Attention: Evaluating Pretrained Transformer Embeddings for Deception Classification
Mamtani, Sumit, Bhure, Abhijeet
This paper investigates fake news detection as a downstream evaluation of Transformer representations, benchmarking encoder-only and decoder-only pre-trained models (BERT, GPT-2, Transformer-XL) as frozen embedders paired with lightweight classifiers. Through controlled preprocessing comparing pooling versus padding and neural versus linear heads, results demonstrate that contextual self-attention encodings consistently transfer effectively. BERT embeddings combined with logistic regression outperform neural baselines on LIAR dataset splits, while analyses of sequence length and aggregation reveal robustness to truncation and advantages from simple max or average pooling. In the pre-digital era, the dissemination of information to mass audiences was predominantly controlled by established publishing organizations and media conglomerates that maintained editorial standards and fact-checking processes. The advent of the Internet and the subsequent proliferation of social media platforms have fundamentally transformed this landscape, democratizing information sharing by enabling any individual to broadcast news and content to global audiences with unprecedented speed and scale [6]. While this democratization has fostered greater accessibility to diverse perspectives, it has simultaneously introduced significant challenges to ensuring the validity, authenticity, and reliability of the information being circulated [8].
- North America > United States (0.14)
- Asia > Japan (0.04)
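The frozen-embedder-plus-linear-head recipe from the Pooling Attention abstract above is straightforward to sketch. Below, a deterministic random token table stands in for BERT's contextual encodings (an assumption made so the sketch is self-contained; the paper mean-pools real frozen Transformer states), followed by a from-scratch logistic-regression head:

```python
import zlib
import numpy as np

DIM = 64

def token_vec(tok):
    """Deterministic vector per token: a crude stand-in for a frozen
    pretrained embedder. crc32 is used as a stable seed (Python's str hash
    is salted across runs)."""
    rng = np.random.default_rng(zlib.crc32(tok.encode("utf-8")))
    return rng.normal(size=DIM)

def embed(text):
    # Average pooling over token vectors; the paper reports that simple
    # mean/max pooling of frozen embeddings transfers well.
    return np.mean([token_vec(t) for t in text.lower().split()], axis=0)

def train_logreg(X, y, lr=0.5, epochs=500):
    """Lightweight logistic-regression head over frozen embeddings."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid of logits
        g = p - y                               # gradient of log-loss wrt logits
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def predict(w, b, X):
    return (X @ w + b > 0).astype(int)
```

Only the classifier head is trained; the embedder is never updated, which is what makes this a probe of how well the pretrained representations transfer.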
Insight-A: Attribution-aware for Multimodal Misinformation Detection
Wu, Junjie, Fu, Yumeng, Gong, Chen, Fu, Guohong
AI-generated content (AIGC) technology has emerged as a prevalent alternative to create multimodal misinformation on social media platforms, posing unprecedented threats to societal safety. However, standard prompting leverages multimodal large language models (MLLMs) to identify the emerging misinformation while ignoring misinformation attribution. To this end, we present Insight-A, exploring attribution with MLLM insights for detecting multimodal misinformation. Insight-A makes two efforts: I) attributing misinformation to forgery sources, and II) building an effective pipeline with hierarchical reasoning that detects distortions across modalities. Specifically, to attribute misinformation to forgery traces based on generation patterns, we devise cross-attribution prompting (CAP) to model the sophisticated correlations between perception and reasoning. Meanwhile, to reduce the subjectivity of human-annotated prompts, automatic attribution-debiased prompting (ADP) is used for task adaptation on MLLMs. Additionally, we design image captioning (IC) to capture visual details for enhanced cross-modal consistency checking. Extensive experiments demonstrate the superiority of our proposal and provide a new paradigm for multimodal misinformation detection in the era of AIGC.
- Europe > Austria > Vienna (0.14)
- Asia > Thailand > Bangkok > Bangkok (0.04)
- North America > United States > Florida > Miami-Dade County > Miami (0.04)
- (6 more...)
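Insight-A's staged design (IC captioning, cross-attribution, verdict) suggests prompt scaffolding along the following lines. The wording and the forgery-source taxonomy here are illustrative guesses, not the paper's actual prompts:

```python
# Candidate forgery sources for the attribution step (illustrative list only).
FORGERY_SOURCES = [
    "diffusion-generated image",
    "face swap",
    "out-of-context image/text pairing",
    "edited or fabricated caption",
]

def build_cap_prompt(image_caption, text_claim):
    """Assemble a cross-attribution-style prompt: visual details first (the IC
    stage), then the textual claim, then an attribution step over candidate
    forgery sources, then the verdict. Purely illustrative scaffolding for
    hierarchical reasoning; Insight-A's real CAP/ADP prompts differ."""
    steps = [
        f"1. Visual details (IC): {image_caption}",
        f"2. Accompanying claim: {text_claim}",
        "3. Attribution: which forgery source, if any, best explains a "
        "cross-modal mismatch? Candidates: " + "; ".join(FORGERY_SOURCES),
        "4. Verdict: consistent, or misinformation (state the attributed source).",
    ]
    return "\n".join(steps)
```

The point of ordering the stages this way is that the model commits to perceptual evidence before reasoning about provenance, which is the perception-then-reasoning correlation the CAP component targets.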