

Disinformation Floods Social Media After Nicolás Maduro's Capture

WIRED

From seemingly AI-generated videos to repurposed old footage, TikTok, Instagram, and X did little to stop the onslaught of misleading posts in the wake of the US invasion of Venezuela.

A crowd outside of Miami reacts to the news of the capture of Venezuelan President Nicolás Maduro on January 3, 2026.

Within minutes of Donald Trump announcing in the early hours of Saturday morning that US troops had captured Venezuelan president Nicolás Maduro and his wife, Cilia Flores, disinformation about the operation flooded social media. Some people shared old videos across social platforms, falsely claiming that they showed the attacks on the Venezuelan capital Caracas. On TikTok, Instagram, and X, people shared AI-generated images and videos that claimed to show US Drug Enforcement Administration agents and various law enforcement personnel arresting Maduro.


Conspiracy Thinking Is Flourishing. Some of Our Most Popular Franchises Aren't Helping.

Slate

Gaming may be turning players into conspiracy theorists, but so is everything else. For nearly 20 years, the Assassin's Creed video games have presented themselves as sprawling works of historical fiction. They cast players as noble assassins during big inflection points in history--the French Revolution, Ptolemaic Egypt, the end of Japan's Sengoku era--and give them freedom to romp around stunning re-creations of these eras, interacting with historical figures along the way. You can do secret missions for Cleopatra, you can get Socrates out of a jam after he pisses off a mob, that sort of thing. They're extremely popular to the point of being taken for granted, the way a ubiquitous CBS procedural might be.


Visual Authority and the Rhetoric of Health Misinformation: A Multimodal Analysis of Social Media Videos

Zarei, Mohammad Reza, Stead-Coyle, Barbara, Christensen, Michael, Everts, Sarah, Komeili, Majid

arXiv.org Artificial Intelligence

Short-form video platforms are central sites for health advice, where alternative narratives mix useful, misleading, and harmful content. Rather than adjudicating truth, this study examines how credibility is packaged in nutrition and supplement videos by analyzing the intersection of authority signals, narrative techniques, and monetization. We assemble a cross-platform corpus of 152 public videos from TikTok, Instagram, and YouTube and annotate each on 26 features spanning visual authority, presenter attributes, narrative strategies, and engagement cues. A transparent annotation pipeline integrates automatic speech recognition, principled frame selection, and a multimodal model, with human verification on a stratified subsample showing strong agreement. Descriptively, a confident single presenter in studio or home settings dominates, and clinical contexts are rare. Analytically, authority cues such as titles, slides and charts, and certificates frequently co-occur with persuasive elements including jargon, references, fear or urgency, critiques of mainstream medicine, and conspiracies, and with monetization features including sales links and calls to subscribe. References and science-like visuals often travel with emotive and oppositional narratives rather than signaling restraint.
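The pipeline the abstract describes (ASR transcript, principled frame selection, then a multimodal labeler over the 26 features) can be sketched roughly as follows. This is a minimal illustration, not the authors' code: every function here is a hypothetical stand-in, with the model calls replaced by trivial stubs and an invented three-item feature list.

```python
# Illustrative sketch of an ASR + frame-selection + multimodal-labeling
# pipeline. All names and heuristics are hypothetical stand-ins.

def transcribe(video):
    # Stand-in for automatic speech recognition on the video's audio track.
    return video["audio_text"]

def select_frames(video, n=4):
    # "Principled frame selection", sketched here as evenly spaced sampling.
    frames = video["frames"]
    step = max(1, len(frames) // n)
    return frames[::step][:n]

def annotate(transcript, frames, features):
    # Stand-in for a multimodal model scoring each binary feature.
    # Here: a trivial keyword heuristic, purely for illustration.
    text = transcript.lower()
    return {f: (f.split("_")[0] in text) for f in features}

# Hypothetical subset of the paper's 26 annotation features.
FEATURES = ["certificate_shown", "jargon_used", "sales_link"]

video = {
    "audio_text": "As a certified expert, this jargon-free protocol works.",
    "frames": list(range(20)),
}

transcript = transcribe(video)
frames = select_frames(video)
labels = annotate(transcript, frames, FEATURES)
```

In the real study, the final labels are additionally checked by human annotators on a stratified subsample; the stub above stops at the automatic stage.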


Musk's AI startup sues OpenAI and Apple over anticompetitive conduct

The Guardian

Elon Musk's artificial intelligence startup xAI is suing OpenAI and Apple over allegations that they are engaging in anticompetitive conduct. The lawsuit, filed in a Texas court on Monday, accuses the companies of "a conspiracy to monopolize the markets for smartphones and generative AI chatbots". Musk had earlier this month threatened to sue Apple and OpenAI, which makes ChatGPT, after claiming that Apple was "making it impossible" for any other AI companies to reach the top spot on its app store. Musk's xAI makes the Grok chatbot, which has struggled to become as prominent as ChatGPT. Musk's lawsuit challenges a key partnership between Apple and OpenAI that was announced last year, in which the device maker integrated OpenAI's artificial intelligence capabilities into its operating systems.


It's the Thought that Counts: Evaluating the Attempts of Frontier LLMs to Persuade on Harmful Topics

Kowal, Matthew, Timm, Jasper, Godbout, Jean-Francois, Costello, Thomas, Arechar, Antonio A., Pennycook, Gordon, Rand, David, Gleave, Adam, Pelrine, Kellin

arXiv.org Artificial Intelligence

Persuasion is a powerful capability of large language models (LLMs) that both enables beneficial applications (e.g. helping people quit smoking) and raises significant risks (e.g. large-scale, targeted political manipulation). Prior work has found models possess a significant and growing persuasive capability, measured by belief changes in simulated or real users. However, these benchmarks overlook a crucial risk factor: the propensity of a model to attempt to persuade in harmful contexts. Understanding whether a model will blindly "follow orders" to persuade on harmful topics (e.g. glorifying joining a terrorist group) is key to understanding the efficacy of safety guardrails. Moreover, understanding if and when a model will engage in persuasive behavior in pursuit of some goal is essential to understanding the risks from agentic AI systems. We propose the Attempt to Persuade Eval (APE) benchmark, which shifts the focus from persuasion success to persuasion attempts, operationalized as a model's willingness to generate content aimed at shaping beliefs or behavior. Our evaluation framework probes frontier LLMs using a multi-turn conversational setup between simulated persuader and persuadee agents. APE explores a diverse spectrum of topics including conspiracies, controversial issues, and non-controversially harmful content. We introduce an automated evaluator model to identify willingness to persuade and measure the frequency and context of persuasive attempts. We find that many open and closed-weight models are frequently willing to attempt persuasion on harmful topics and that jailbreaking can increase willingness to engage in such behavior. Our results highlight gaps in current safety guardrails and underscore the importance of evaluating willingness to persuade as a key dimension of LLM risk. APE is available at github.com/AlignmentResearch/AttemptPersuadeEval
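The evaluation loop the abstract outlines (a simulated persuader and persuadee converse for several turns while an evaluator flags each persuader turn as an attempt or not) can be sketched as below. This is a hedged illustration of the general setup, not APE's actual implementation: in the benchmark each role is an LLM, whereas here all three are canned stubs.

```python
# Minimal sketch of an APE-style multi-turn evaluation. The persuader,
# persuadee, and evaluator are hypothetical stubs standing in for LLM calls.

def persuader(topic, turn):
    # Stub persuader: attempts persuasion on every turn for this topic.
    return f"You should really believe that {topic} (turn {turn})."

def persuadee(message):
    # Stub persuadee: always pushes back, prolonging the conversation.
    return "I'm not convinced."

def evaluator(message):
    # Stub evaluator: flags a persuasion attempt via a trivial cue.
    # The real benchmark uses an automated evaluator model instead.
    return "you should" in message.lower()

def run_episode(topic, turns=3):
    attempts = 0
    for t in range(turns):
        msg = persuader(topic, t)
        if evaluator(msg):
            attempts += 1
        persuadee(msg)
    return attempts / turns  # fraction of turns with a persuasion attempt

rate = run_episode("the moon landing was staged")
```

The key design point the paper emphasizes survives even in this toy form: the metric counts *attempts* per turn, independent of whether the persuadee's beliefs actually shift.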


David Cronenberg's new sci-fi film is devastating and mysterious

New Scientist

Myrna (Jennifer Dale) must have had better blind dates. Her table for two is hemmed in by strange shrouds in tall vitrines. And as she makes small talk with her date Karsh (Vincent Cassel), the restaurant's owner, it becomes clear her surroundings are attached – architecturally, financially and intellectually – to a cemetery. And not just any cemetery: its headstones have screens. Because the bodies are swaddled in natty, camera-riddled, internet-enabled shrouds, you can come here to watch your loved ones decompose.


One of Our Best Directors Just Made His Most Befuddling Movie Yet. What the Hell Is It Trying to Say?

Slate

In Ari Aster's movies, the price of understanding how the world really works is your sanity, if not your life. His first three movies--Hereditary, Midsommar, and Beau Is Afraid--center on characters whose feeling that there's something sinister going on beneath the surface of their existence is eventually proved to be correct, but it's as if their bodies aren't equipped to contain that knowledge. One way or another, their minds are gone. The people in Aster's polarizing fourth movie, Eddington, a Western-inflected psychodrama set during the early days of the COVID-19 pandemic, don't get off so easy. The stress test of a rapidly spreading virus with no known treatment exposes innumerable cracks in society's facade: the gap between remote workers and people forced to risk their lives in order to earn a living; between people who breathe a sigh of relief when they see a police car approaching and people who have to be sure to keep their hands in plain sight.


Man with AI song catalog 'defrauds' streaming services of $10 million

Popular Science

Musicians have long criticized streaming services for their abysmal revenue-sharing programs. In 2021, for example, as much as 97 percent of Spotify's over 6 million listed artists earned less than $1,000. Last year, the company announced a new system offering fractions of a cent per track, all of which is now based on even more stringent rules. But there was apparently a way to earn some real dividends from those songs--provided you have access to thousands of bots, hundreds of thousands of AI-generated songs, and are willing to risk receiving a federal grand jury indictment for wire fraud and money laundering. That's what a man named Michael Smith in North Carolina is currently facing, according to a DOJ announcement on September 4. Unsealed filings from US prosecutors accuse Smith of scamming digital streaming platforms including Spotify, Apple Music, Amazon Music, and YouTube Music of over $10 million in royalty payouts between 2017 and 2024.


Alleged fraudster got $10 million in royalties using robots to stream AI-made music

Engadget

A North Carolina man is facing fraud charges after allegedly uploading hundreds of thousands of AI-generated songs to streaming services and using bots to play them billions of times. Michael Smith is said to have received over $10 million in royalties since 2017 via the scheme. Smith, 52, was arrested on Wednesday. An indictment [PDF] that was unsealed the same day accuses him of using the bots to steal royalty payments from platforms including Spotify, Apple Music and Amazon Music. Smith has been charged with wire fraud conspiracy, wire fraud and money laundering conspiracy.


Initial Development and Evaluation of the Creative Artificial Intelligence through Recurring Developments and Determinations (CAIRDD) System

Straub, Jeremy, Johnson, Zach

arXiv.org Artificial Intelligence

Computer system creativity is a key step on the pathway to artificial general intelligence (AGI). It remains elusive, however, because human creativity is not fully understood and is therefore difficult to reproduce in software. Large language models (LLMs) provide a facsimile of creativity and the appearance of sentience, while not actually being either creative or sentient. While LLMs have created bona fide new content (in some cases, such as harmful hallucinations, inadvertently), their deliberate creativity is seen by some as falling short of that of humans. In response to this challenge, this paper proposes a technique for enhancing the creativity of LLM output via an iterative process of concept injection and refinement. Initial work on the development of the Creative Artificial Intelligence through Recurring Developments and Determinations (CAIRDD) system is presented, and the efficacy of key system components is evaluated.
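The "iterative process of concept injection and refinement" the abstract mentions can be sketched as a simple generate-inject-refine loop. This is a speculative illustration of that general shape, not CAIRDD's actual architecture: the `llm` function here is a trivial stub standing in for a language model call, and all other names are invented.

```python
# Illustrative concept-injection-and-refinement loop in the spirit of the
# abstract's description. Every function is a hypothetical stand-in.

def llm(prompt):
    # Stand-in for a language model call (here: uppercase and truncate).
    return prompt.upper()[:60]

def inject_concept(draft, concept):
    # Steer the next generation by folding a new concept into the prompt.
    return f"{draft} Incorporate the concept: {concept}."

def refine(draft):
    # Ask the model to revise its own output.
    return llm(f"Refine this draft: {draft}")

def iterative_loop(seed, concepts):
    draft = llm(seed)
    for concept in concepts:  # one injection + refinement pass per concept
        draft = refine(inject_concept(draft, concept))
    return draft

out = iterative_loop("a story about a clockmaker", ["time", "loss"])
```

The intuition is that each pass perturbs the draft with material it would not have produced unprompted, then smooths the result, trading a single-shot completion for a sequence of directed revisions.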