The Download: AI propaganda, and digital twins
Renée DiResta is the research manager of the Stanford Internet Observatory and the author of Invisible Rulers: The People Who Turn Lies into Reality.
- Europe > Russia (0.26)
- Asia > Russia (0.26)
- Asia > Middle East > Israel (0.26)
- (2 more...)
- Government (0.54)
- Media (0.52)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.81)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.49)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.49)
Propagandists are using AI too--and companies need to be open about it
At the end of May, OpenAI marked a new "first" in its corporate history. It wasn't an even more powerful language model or a new data partnership, but a report disclosing that bad actors had misused its products to run influence operations. The company had caught five networks of covert propagandists--including players from Russia, China, Iran, and Israel--using its generative AI tools for deceptive tactics that ranged from creating large volumes of social media comments in multiple languages to turning news articles into Facebook posts. The use of these tools, OpenAI noted, seemed intended to improve the quality and quantity of output. AI gives propagandists a productivity boost too.
- Europe > Russia (0.32)
- Asia > Russia (0.32)
- Asia > Middle East > Israel (0.28)
- (2 more...)
A Cost Analysis of Generative Language Models and Influence Operations
Despite speculation that recent large language models (LLMs) are likely to be used maliciously to improve the quality or scale of influence operations, uncertainty persists regarding the economic value that LLMs offer propagandists. This research constructs a model of costs facing propagandists for content generation at scale and analyzes (1) the potential savings that LLMs could offer propagandists, (2) the potential deterrent effect of monitoring controls on API-accessible LLMs, and (3) the optimal strategy for propagandists choosing between multiple private and/or open source LLMs when conducting influence operations. Primary results suggest that LLMs need only produce usable outputs with relatively low reliability (roughly 25%) to offer cost savings to propagandists, that the potential reduction in content generation costs can be quite high (up to 70% for a highly reliable model), and that monitoring capabilities have sharply limited cost imposition effects when alternative open source models are available. In addition, these results suggest that nation-states -- even those conducting many large-scale influence operations per year -- are unlikely to benefit economically from training custom LLMs specifically for use in influence operations.
- Media > News (1.00)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Services (0.93)
- Government > Regional Government > Asia Government (0.68)
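The economics in the abstract above can be sketched as a toy back-of-the-envelope model: a usable post costs the operator the generation-plus-review cost divided by the model's reliability. Every number below is an illustrative assumption, not a figure from the paper.

```python
# Toy version of the propagandist's cost comparison; all dollar figures
# and rates are invented for illustration.

def llm_cost_per_usable_post(api_cost, review_cost, reliability):
    """Expected cost of one usable post when only a fraction `reliability`
    of generations pass human review: on average 1/reliability attempts
    are needed, each paying generation plus review."""
    return (api_cost + review_cost) / reliability

def human_cost_per_post(hourly_wage, posts_per_hour):
    """Cost of one post written entirely by a hired human."""
    return hourly_wage / posts_per_hour

human = human_cost_per_post(hourly_wage=15.0, posts_per_hour=10)  # $1.50/post
for reliability in (0.10, 0.25, 0.50, 0.90):
    llm = llm_cost_per_usable_post(api_cost=0.002, review_cost=0.30,
                                   reliability=reliability)
    print(f"reliability={reliability:.0%}  llm=${llm:.2f}  "
          f"human=${human:.2f}  savings={1 - llm / human:+.0%}")
```

Under these assumed numbers the LLM pipeline already undercuts the human baseline at roughly 25% reliability, and the savings grow as reliability rises, mirroring the shape (though not the exact figures) of the paper's result.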
Analyzing the Strategy of Propaganda using Inverse Reinforcement Learning: Evidence from the 2022 Russian Invasion of Ukraine
Geissler, Dominique, Feuerriegel, Stefan
The 2022 Russian invasion of Ukraine was accompanied by a large-scale, pro-Russian propaganda campaign on social media. However, the strategy behind the dissemination of propaganda has remained unclear, particularly how the online discourse was strategically shaped by the propagandists' community. Here, we analyze the strategy of the Twitter community using an inverse reinforcement learning (IRL) approach. Specifically, IRL allows us to model online behavior as a Markov decision process, where the goal is to infer the underlying reward structure that guides propagandists when interacting with users with a supporting or opposing stance toward the invasion. Thereby, we aim to understand empirically whether and how between-user interactions are strategically used to promote the proliferation of Russian propaganda. For this, we leverage a large-scale dataset with 349,455 posts with pro-Russian propaganda from 132,131 users. We show that bots and humans follow a different strategy: bots respond predominantly to pro-invasion messages, suggesting that they seek to drive virality; while messages indicating opposition primarily elicit responses from humans, suggesting that they tend to engage in critical discussions. To the best of our knowledge, this is the first study analyzing the strategy behind propaganda from the 2022 Russian invasion of Ukraine through the lens of IRL.
- Asia > Russia (1.00)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- Europe > Russia (0.05)
- (7 more...)
- Government > Regional Government > Europe Government > Russia Government (1.00)
- Government > Regional Government > Asia Government > Russia Government (1.00)
- Government > Military (1.00)
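The core inverse-RL intuition above can be shown in miniature: if actors are assumed to choose actions with softmax (Boltzmann) probability in a latent reward, that reward can be recovered from observed choice frequencies. The two-stance, two-action setup and all counts below are hypothetical simplifications, not the paper's full Markov-decision-process formulation.

```python
import math

# Hypothetical counts of bot behavior; the real study infers rewards from
# 349,455 posts via inverse reinforcement learning on an MDP, not this shortcut.
observed = {
    "pro-invasion": (800, 200),  # (responses, ignores)
    "opposing":     (150, 850),
}

def inferred_reward(responses, ignores):
    """Maximum-likelihood reward under a two-action softmax policy with the
    'ignore' reward fixed at 0: r equals the log-odds of responding."""
    p = responses / (responses + ignores)
    return math.log(p / (1 - p))

rewards = {stance: inferred_reward(*counts) for stance, counts in observed.items()}
# A bot community that overwhelmingly answers pro-invasion messages is
# inferred to assign them higher reward, consistent with chasing virality.
```

The one-step log-odds recovery here stands in for the full IRL machinery: with longer interaction sequences, the same principle is applied over trajectories rather than single choices.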
The Internet-Warping Power of 'Synthetic Histories'
History has long been a theater of war, the past serving as a proxy in conflicts over the present. Ron DeSantis is warping history by banning books on racism from Florida's schools; people remain divided about the right approach to repatriating Indigenous objects and remains; the Pentagon Papers were an attempt to twist narratives about the Vietnam War. The Nazis seized power in part by manipulating the past--they used propaganda about the burning of the Reichstag, the German parliament building, to justify persecuting political rivals and assuming dictatorial authority. That specific example weighs on Eric Horvitz, Microsoft's chief scientific officer and a leading AI researcher, who tells me that the apparent AI revolution could not only provide a new weapon to propagandists, as social media did earlier this century, but also entirely reshape the historiographic terrain, perhaps laying the groundwork for a modern-day Reichstag fire. Generative AI tools are powerful and easy-to-use programs that produce synthetic text, images, video, and audio, all of which bad actors can use to fabricate events, people, speeches, and news reports to sow disinformation.
- Asia > Vietnam (0.25)
- Europe > Ukraine (0.15)
- North America > United States > North Carolina (0.05)
- (3 more...)
- Media > News (1.00)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (1.00)
- Health & Medicine > Therapeutic Area > Immunology (1.00)
- (2 more...)
Forecasting Potential Misuses of Language Models for Disinformation Campaigns--and How to Reduce Risk
OpenAI researchers collaborated with Georgetown University's Center for Security and Emerging Technology and the Stanford Internet Observatory to investigate how large language models might be misused for disinformation purposes. The collaboration included an October 2021 workshop bringing together 30 disinformation researchers, machine learning experts, and policy analysts, and culminated in a co-authored report building on more than a year of research. This report outlines the threats that language models pose to the information environment if used to augment disinformation campaigns and introduces a framework for analyzing potential mitigations. As generative language models improve, they open up new possibilities in fields as diverse as healthcare, law, education and science. But, as with any new technology, it is worth considering how they can be misused.
- Media > News (1.00)
- Government (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.94)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.74)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.74)
How To Use AI To Fight Propaganda
"All media exist to invest our lives with artificial perceptions and arbitrary values." The word disinformation is a cognate for the Russian dezinformatsia, the name of a KGB division devoted to propaganda. The world has come a long way from the Stalin days: USSR split, KGB became extinct, the Berlin Wall came down, and Fukuyama announced the End of History. Once the cold war ended, we thought the worst was over. Except, propaganda is a perpetual motion machine.
- Information Technology > Services (0.76)
- Media > News (0.70)
- Government > Regional Government > Asia Government > India Government (0.34)
Multi-modal Identification of State-Sponsored Propaganda on Social Media
Guo, Xiaobo, Vosoughi, Soroush
The prevalence of state-sponsored propaganda on the Internet has become a cause for concern in recent years. While much effort has been made to identify state-sponsored Internet propaganda, the problem remains far from solved: the ambiguous definition of propaganda leads to unreliable data labelling, and the huge number of potential predictive features makes the models hard to interpret. This paper is the first attempt to build a balanced dataset for this task. The dataset comprises propaganda by three different organizations across two time periods. A multi-modal framework for detecting propaganda messages solely from visual and textual content is proposed, which achieves promising performance in detecting propaganda by the three organizations both within the same time period (training and testing on data from the same period; F1=0.869) and across time periods (training on past data, testing on future data; F1=0.697). To reduce the influence of false positive predictions, we vary the decision threshold to examine the relationship between the false positive and true positive rates, and we provide explanations for our models' predictions with visualization tools to enhance the interpretability of our framework. Our new dataset and general framework provide a strong benchmark for the task of identifying state-sponsored Internet propaganda and point out a potential path for future work on this task.
- Asia > Russia (0.28)
- North America > United States > New Hampshire > Grafton County > Hanover (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (2 more...)
- Media > News (0.68)
- Government > Regional Government (0.46)
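The threshold analysis the abstract mentions can be sketched as follows: sweep a classifier's decision threshold and watch the false-positive/true-positive trade-off shift along with the F1 score. The scores and labels below are invented, and the classifier producing them is assumed.

```python
# Sketch of threshold tuning for a hypothetical propaganda classifier;
# scores and labels are made-up illustrative data.

def confusion(scores, labels, threshold):
    """Count true positives, false positives, and false negatives at a
    given decision threshold (label 1 = propaganda)."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return tp, fp, fn

def f1(tp, fp, fn):
    """Harmonic mean of precision and recall; 0.0 on empty denominators."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

scores = [0.95, 0.80, 0.65, 0.55, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    0,    1,    0,    1,    0,    0]

for t in (0.25, 0.50, 0.75):
    tp, fp, fn = confusion(scores, labels, t)
    print(f"threshold={t:.2f}  TP={tp}  FP={fp}  FN={fn}  F1={f1(tp, fp, fn):.3f}")
```

Raising the threshold trims false positives at the cost of recall, which is exactly the trade-off the authors probe when they "change the threshold" in their evaluation.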
Don't Worry About Deepfakes. Worry About Why People Fall for Them.
The use of AI to create high-resolution fake images and videos has raised concerns about the use of disinformation as a political tool. Over the past few years, the application of artificial intelligence to create faked images, audio, and video has sparked a great deal of concern among policymakers and researchers. A series of compelling demonstrations -- the use of AI to create believable synthetic voices, to imitate the facial movements of a president, to swap faces in faked porn -- illustrates the speed at which the technology is advancing. Machine learning, the subfield of artificial intelligence that underlies much of the technology's modern progress, studies algorithms that improve through the processing of data. Machine learning systems acquire what is known in the field as a representation, a concept of the task to be solved, which can then be used to generate new iterations of the thing that has been learned.
- Government (1.00)
- Information Technology > Security & Privacy (0.83)
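The learn-a-representation-then-generate loop described above can be shown with a deliberately tiny stand-in: a "model" whose entire learned representation is the mean and standard deviation of its training data, from which it samples new iterations. The numbers are made up, and real generative models learn vastly richer representations; only the loop's shape carries over.

```python
import random
import statistics

# Toy "generative model": its learned representation is just two numbers.
training_data = [4.1, 3.8, 5.0, 4.6, 4.3, 3.9]  # invented data

# "Training": acquire a representation of the data.
mu = statistics.mean(training_data)
sigma = statistics.stdev(training_data)

def generate(n, seed=0):
    """Sample n new data points from the learned representation."""
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) for _ in range(n)]

# "Generation": produce new iterations of the thing that was learned.
synthetic = generate(5)
```

A deepfake system follows the same two phases, only with a learned representation of faces or voices instead of a mean and a standard deviation.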
Online Conspiracy Theories: The WIRED Guide
It's how we've always made sense of the world: Our ancestors wouldn't have survived if they hadn't realized that plants tend to flourish after rainfall or that sabertooth tigers tended to eat them. But sometimes we're just a little too good at finding meaning in the noise, occasionally unable to separate real patterns from those of our own imagining. These days, your pattern-matching skills will help you find Waldo, but they are also why celebrities' faces keep popping up on tortillas. At their most paranoid and byzantine, these pattern-matching misfires are called conspiracy theories: unfounded, deeply held alternative explanations for how things are--often invoking some shadowy, malevolent force masterminding the coverup. Conspiracy theories thrive on the internet, but that's certainly not where they were born. The Flat Earth Society has existed since the 1800s, and people have been speculating about which public figures are secretly alive or dead at least since 68 AD, when Romans weren't convinced their arsonist emperor Nero had actually committed suicide. But conspiracies and the digital world do mesh well, probably because they scratch similar itches in our not-quite-domesticated psyches.