propaganda


Trump Doesn't Need the Proud Boys Anymore

WIRED

In a world where ICE agents are shooting US citizens on the street, the need for militias and extremist groups like the Proud Boys to support far-right interests has evaporated. Whether it was protesting Covid lockdowns, attending school board meetings, or facing off against Black Lives Matter protesters, the far-right Proud Boys were always on hand to support Donald Trump's first term in office. When Trump left office in 2021, the group's leaders languished in jail for their role in the January 6 attack on the Capitol. With reported infighting destabilizing the movement, it looked like the group's glory days were behind it. But Trump's return a year ago, and his release of all January 6 prisoners, signaled that a Proud Boy comeback could be in the cards.


This year we were drowning in a sea of slick, nonsensical AI slop

New Scientist

There is no doubt that 2025 will be remembered as the year of slop. A popular term for incorrect, weird and often downright ugly AI-generated content, slop has rotted nearly every platform on the internet. Enough slop has accumulated over the past few years that scientists can now measure its effects on people over time. Researchers at the Massachusetts Institute of Technology found that people using large language models (LLMs) such as those behind ChatGPT to write essays show far less brain activity than those who don't. And then there are the potential ill-effects on our mental health, with reports that certain chatbots are encouraging people to believe in fantasies or conspiracies, as well as urging them to self-harm, and that they may trigger or worsen psychosis.


Simulating Misinformation Propagation in Social Networks using Large Language Models

Maurya, Raj Gaurav, Shukla, Vaibhav, Dandekar, Raj Abhijit, Dandekar, Rajat, Panat, Sreedath

arXiv.org Artificial Intelligence

Misinformation on social media thrives on surprise, emotion, and identity-driven reasoning, often amplified through human cognitive biases. To investigate these mechanisms, we model large language model (LLM) personas as synthetic agents that mimic user-level biases, ideological alignments, and trust heuristics. Within this setup, we introduce an auditor-node framework to simulate and analyze how misinformation evolves as it circulates through networks of such agents. News articles are propagated across networks of persona-conditioned LLM nodes, each rewriting received content. A question-answering-based auditor then measures factual fidelity at every step, offering interpretable, claim-level tracking of misinformation drift. We formalize a misinformation index and a misinformation propagation rate to quantify factual degradation across homogeneous and heterogeneous branches of up to 30 sequential rewrites. Experiments with 21 personas across 10 domains reveal that identity- and ideology-based personas act as misinformation accelerators, especially in politics, marketing, and technology. By contrast, expert-driven personas preserve factual stability. Controlled-random branch simulations further show that once early distortions emerge, heterogeneous persona interactions rapidly escalate misinformation to propaganda-level distortion. Our taxonomy of misinformation severity, spanning factual errors, lies, and propaganda, connects observed drift to established theories in misinformation studies. These findings demonstrate the dual role of LLMs as both proxies for human-like biases and as auditors capable of tracing information fidelity. The proposed framework provides an interpretable, empirically grounded approach for studying, simulating, and mitigating misinformation diffusion in digital ecosystems.
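
The propagate-then-audit loop the abstract describes can be sketched in a few lines. This is a minimal, runnable illustration, not the authors' code: `stub_rewrite` and the claim-membership check below are deterministic stand-ins for the persona-conditioned LLM rewriting and the question-answering auditor used in the paper, and the `Persona` fields are assumptions for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    distortion: float  # fraction of claims this persona tends to distort (assumed knob)

def propagate(article_claims, personas, rewrite):
    """Pass claims through a chain of persona nodes; record the state after each hop."""
    states = [list(article_claims)]
    claims = list(article_claims)
    for persona in personas:
        claims = rewrite(persona, claims)
        states.append(list(claims))
    return states

def misinformation_index(original, current):
    """Fraction of original claims no longer recoverable from the rewrite
    (a QA-based auditor would check answerability instead of membership)."""
    kept = sum(1 for claim in original if claim in current)
    return 1.0 - kept / len(original)

# Deterministic stub: replaces a prefix of claims proportional to the persona's bias.
def stub_rewrite(persona, claims):
    n_distort = int(len(claims) * persona.distortion)
    return ["[distorted]"] * n_distort + claims[n_distort:]

claims = [f"claim-{i}" for i in range(10)]
chain = [Persona("partisan", 0.3), Persona("expert", 0.0), Persona("marketer", 0.2)]
states = propagate(claims, chain, stub_rewrite)
indices = [misinformation_index(claims, state) for state in states]
```

With this stub the index never decreases along the chain, mirroring the paper's observation that early distortions persist while expert-style (zero-distortion) nodes merely preserve the current state.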


Fine-grained Narrative Classification in Biased News Articles

Afroz, Zeba, Vardhan, Harsh, Bhakuni, Pawan, Punia, Aanchal, Kumar, Rajdeep, Akhtar, Md. Shad

arXiv.org Artificial Intelligence

Narratives are the cognitive and emotional scaffolds of propaganda. They organize isolated persuasive techniques into coherent stories that justify actions, attribute blame, and evoke identification with ideological camps. In this paper, we propose a novel fine-grained narrative classification task for biased news articles. We also explore article-bias classification as a precursor task to narrative classification and fine-grained persuasive technique identification. We develop INDI-PROP, the first ideologically grounded fine-grained narrative dataset with multi-level annotation for analyzing propaganda in Indian news media. Our dataset INDI-PROP comprises 1,266 articles focusing on two polarizing socio-political events in recent times: CAA and the Farmers' protest. Each article is annotated at three hierarchical levels: (i) ideological article-bias (pro-government, pro-opposition, neutral), (ii) event-specific fine-grained narrative frames anchored in ideological polarity and communicative intent, and (iii) persuasive techniques. We propose FANTA and TPTC, two GPT-4o-mini-guided multi-hop prompt-based reasoning frameworks for bias, narrative, and persuasive technique classification. FANTA leverages multi-layered communicative phenomena by integrating information extraction and contextual framing for hierarchical reasoning. TPTC, on the other hand, adopts a systematic decomposition of persuasive cues via a two-stage approach. Our evaluation suggests substantial improvement over the underlying baselines in each case.
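
The hierarchical, multi-hop prompting idea can be sketched as a staged pipeline in which each level conditions on the previous one. This is a hypothetical illustration in the spirit of the paper's frameworks, not their implementation: `ask` stands in for a GPT-4o-mini call, and `stub_ask` with its label strings is a deterministic stub so the control flow runs without an API key.

```python
def multi_hop_classify(article, ask):
    """Stage the three levels so later predictions condition on earlier ones."""
    bias = ask("Classify the ideological bias "
               f"(pro-government / pro-opposition / neutral):\n{article}")
    narrative = ask(f"Given bias '{bias}', identify the narrative frame:\n{article}")
    techniques = ask(f"Given bias '{bias}' and narrative frame '{narrative}', "
                     f"list the persuasive techniques used:\n{article}")
    return {"bias": bias, "narrative": narrative, "techniques": techniques}

# Deterministic stub in place of an LLM call; labels are invented for the sketch.
def stub_ask(prompt):
    if prompt.startswith("Classify"):
        return "neutral"
    if "narrative frame:" in prompt:
        return "procedural-reporting"
    return "loaded language"

result = multi_hop_classify("Parliament debated the farm bills today.", stub_ask)
```

The design point is that threading the bias prediction into the later prompts gives the model the framing context that a flat, single-prompt classifier would lack.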



timely (R2, R3) and important

Neural Information Processing Systems

We thank the reviewers for their helpful comments. Reviewers noted that Grover generates "extremely credible" articles (R2) and that … We appreciate this point and will revisit the word choice. We haven't seen the model … We believe that our "novel way to guide generation" makes Grover novel, not just an … Indeed, GPT(-2), BERT, XLNet, and Grover share the same backbone but learn from different objectives. What is given to the turkers? For overall trustworthiness, for instance, we asked "Does the article read like it comes …" "It takes a thief to catch a thief"?


Elon Musk's Grokipedia Pushes Far-Right Talking Points

WIRED

The new AI-powered Wikipedia competitor falsely claims that pornography worsened the AIDS epidemic and that social media may be fueling a rise in transgender people. On Monday, Elon Musk's xAI startup launched Grokipedia, which the billionaire is pitching as an AI-generated alternative to the crowdsourced encyclopedia Wikipedia. Musk first announced the project in late September on his social media platform X, saying it would be "a massive improvement over Wikipedia," and "a necessary step towards the xAI goal of understanding the Universe." Musk said last week that he had delayed the launch of Grokipedia because his team needed "to do more work to purge out the propaganda." When Grokipedia eventually dropped on Monday, WIRED was initially unable to access the website and received an automated message that it was blocked.


Chatbots Are Pushing Sanctioned Russian Propaganda

WIRED

ChatGPT, Gemini, DeepSeek, and Grok are serving users propaganda from Russian-backed media when asked about the invasion of Ukraine, new research finds. OpenAI's ChatGPT, Google's Gemini, DeepSeek, and xAI's Grok are pushing Russian state propaganda from sanctioned entities, including citations from Russian state media and sites tied to Russian intelligence or pro-Kremlin narratives, when asked about the war against Ukraine, according to a new report. Researchers from the Institute for Strategic Dialogue (ISD) claim that Russian propaganda has targeted and exploited data voids, where searches for real-time data return few results from legitimate sources, to promote false and misleading information. Almost one-fifth of responses to questions about Russia's war in Ukraine, across the four chatbots they tested, cited Russian state-attributed sources, the ISD research claims. "It raises questions regarding how chatbots should deal when referencing these sources, considering many of them are sanctioned in the EU," says Pablo Maristany de las Casas, an analyst at the ISD who led the research.


