Information Operations


X-Troll: eXplainable Detection of State-Sponsored Information Operations Agents

Tian, Lin, Zhang, Xiuzhen, Kim, Maria Myung-Hee, Biggs, Jennifer, Rizoiu, Marian-Andrei

arXiv.org Artificial Intelligence

State-sponsored trolls, malicious actors who deploy sophisticated linguistic manipulation in coordinated information campaigns, pose threats to online discourse integrity. While Large Language Models (LLMs) achieve strong performance on general natural language processing (NLP) tasks, they struggle with subtle propaganda detection and operate as "black boxes", providing no interpretable insights into manipulation strategies. This paper introduces X-Troll, a novel framework that bridges this gap by integrating explainable adapter-based LLMs with expert-derived linguistic knowledge to detect state-sponsored trolls and provide human-readable explanations for its decisions. X-Troll incorporates appraisal theory and propaganda analysis through specialized LoRA adapters, using dynamic gating to capture campaign-specific discourse patterns in coordinated information operations. Experiments on real-world data demonstrate that our linguistically informed approach achieves strong accuracy compared with both general LLM baselines and existing troll detection models, while providing enhanced transparency through expert-grounded explanations that reveal the specific linguistic strategies used by state-sponsored actors. X-Troll source code is available at: https://github.com/ltian678/xtroll_source/.


IOHunter: Graph Foundation Model to Uncover Online Information Operations

Minici, Marco, Luceri, Luca, Fabbri, Francesco, Ferrara, Emilio

arXiv.org Artificial Intelligence

Social media platforms have become vital spaces for public discourse, serving as modern agoras where a wide range of voices influence societal narratives. However, their open nature also makes them vulnerable to exploitation by malicious actors, including state-sponsored entities, who can conduct information operations (IOs) to manipulate public opinion. The spread of misinformation, false news, and misleading claims threatens democratic processes and societal cohesion, making it crucial to develop methods for the timely detection of inauthentic activity to protect the integrity of online discourse. In this work, we introduce a methodology designed to identify users orchestrating information operations, a.k.a. IO drivers, across various influence campaigns. Our framework, named IOHunter, leverages the combined strengths of Language Models and Graph Neural Networks to improve generalization in supervised, scarcely-supervised, and cross-IO contexts. Our approach achieves state-of-the-art performance across multiple sets of IOs originating from six countries, significantly surpassing existing approaches. This research marks a step toward developing Graph Foundation Models specifically tailored for the task of IO detection on social media platforms.


Large Language Models Reveal Information Operation Goals, Tactics, and Narrative Frames

Burghardt, Keith, Chen, Kai, Lerman, Kristina

arXiv.org Artificial Intelligence

Adversarial information operations can destabilize societies by undermining fair elections, manipulating public opinion on policies, and promoting scams. Despite their widespread occurrence and potential impacts, our understanding of influence campaigns is limited by manual analysis of messages and subjective interpretation of their observable behavior. In this paper, we explore whether these limitations can be mitigated with large language models (LLMs), using GPT-3.5 as a case study for coordinated campaign annotation. We first use GPT-3.5 to scrutinize 126 identified information operations spanning over a decade. We utilize a number of metrics to quantify the close (if imperfect) agreement between LLM and ground truth descriptions. We next extract coordinated campaigns from two large multilingual datasets from X (formerly Twitter) that respectively discuss the 2022 French election and the 2023 Balikatan Philippine-U.S. military exercise. For each coordinated campaign, we use GPT-3.5 to analyze posts related to a specific concern and extract goals, tactics, and narrative frames, both before and after critical events (such as the date of an election). While GPT-3.5 sometimes disagrees with subjective interpretation, its ability to summarize and interpret demonstrates LLMs' potential to extract higher-order indicators from text to provide a more complete picture of information campaigns compared to previous methods.


Exposing Influence Campaigns in the Age of LLMs: A Behavioral-Based AI Approach to Detecting State-Sponsored Trolls

Ezzeddine, Fatima, Luceri, Luca, Ayoub, Omran, Sbeity, Ihab, Nogara, Gianluca, Ferrara, Emilio, Giordano, Silvia

arXiv.org Artificial Intelligence

The detection of state-sponsored trolls operating in influence campaigns on social media is a critical and unsolved challenge for the research community, which has significant implications beyond the online realm. To address this challenge, we propose a new AI-based solution that identifies troll accounts solely through behavioral cues associated with their sequences of sharing activity, encompassing both their actions and the feedback they receive from others. Our approach does not incorporate any textual content shared and consists of two steps: First, we leverage an LSTM-based classifier to determine whether account sequences belong to a state-sponsored troll or an organic, legitimate user. Second, we employ the classified sequences to calculate a metric named the "Troll Score", quantifying the degree to which an account exhibits troll-like behavior. To assess the effectiveness of our method, we examine its performance in the context of the 2016 Russian interference campaign during the U.S. Presidential election. Our experiments yield compelling results, demonstrating that our approach can identify account sequences with an AUC close to 99% and accurately differentiate between Russian trolls and organic users with an AUC of 91%. Notably, our behavioral-based approach holds a significant advantage in the ever-evolving landscape, where textual and linguistic properties can be easily mimicked by Large Language Models (LLMs): In contrast to existing language-based techniques, it relies on more challenging-to-replicate behavioral cues, ensuring greater resilience in identifying influence campaigns, especially given the potential increase in the usage of LLMs for generating inauthentic content. Finally, we assessed the generalizability of our solution to various entities driving different information operations and found promising results that will guide future research.
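The two-step pipeline described above can be illustrated with a minimal sketch. The sequence classifier itself (an LSTM in the paper) is omitted here; we assume it emits a troll probability per sharing sequence. The aggregation into an account-level "Troll Score" shown below (a simple mean over classified sequences) is a hypothetical stand-in, not the paper's exact formula.

```python
def troll_score(sequence_probs):
    """Aggregate per-sequence troll probabilities into one account-level score.

    Hypothetical aggregation: the mean of the classifier's per-sequence
    troll probabilities for this account.
    """
    if not sequence_probs:
        raise ValueError("need at least one classified sequence")
    return sum(sequence_probs) / len(sequence_probs)

def label(score, threshold=0.5):
    # Accounts whose score exceeds the threshold are flagged as troll-like.
    return "troll" if score >= threshold else "organic"

# Hypothetical outputs of the sequence classifier for one account.
probs = [0.92, 0.85, 0.78, 0.66]
score = troll_score(probs)
print(round(score, 4), label(score))  # → 0.8025 troll
```

A behavioral score of this kind depends only on activity patterns, which is why (as the abstract argues) it is harder for LLM-generated content to evade than text-based detectors.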


Joint Chiefs' Information Officer: U.S. Is Behind on Information Warfare. AI Can Help

#artificialintelligence

The United States needs a better strategy and more advanced tools for information operations, Lt. Gen. Dennis Crall, the Joint Staff's chief information officer, said Thursday. The government has become slower and less confident in its approach, a reticence it can't afford as artificial intelligence drastically increases the pace of messaging and information campaigns, said Crall, who is also the Joint Staff's director for command, control, communications, computers, and cyber. "The speed at which machines and AI won some of these information campaigns changes the game drastically for us. If we study, if we're hesitant, if we don't have good left and right lateral limits, if every operation requires a new set of permissions...We're never going to compete." Crall made his remarks at the NDIA conference for Special Operations and Low Intensity Conflict, or SOLIC.


Deep Fakes And National Security – Analysis

#artificialintelligence

"Deep fakes"--a term that first emerged in 2017 to describe realistic photo, audio, video, and other forgeries generated with artificial intelligence (AI) technologies--could present a variety of national security challenges in the years to come. As these technologies continue to mature, they could hold significant implications for congressional oversight, U.S. defense authorizations and appropriations, and the regulation of social media platforms. Though definitions vary, deep fakes are most commonly described as forgeries created using techniques in machine learning (ML)--a subfield of AI--especially generative adversarial networks (GANs). In the GAN process, two ML systems called neural networks are trained in competition with each other. The first network, or the generator, is tasked with creating counterfeit data--such as photos, audio recordings, or video footage--that replicate the properties of the original data set.


The State of AI Ethics Report (January 2021)

Gupta, Abhishek, Royer, Alexandrine, Wright, Connor, Khan, Falaah Arif, Heath, Victoria, Galinkin, Erick, Khurana, Ryan, Ganapini, Marianna Bergamaschi, Fancy, Muriam, Sweidan, Masa, Akif, Mo, Butalid, Renjie

arXiv.org Artificial Intelligence

The 3rd edition of the Montreal AI Ethics Institute's The State of AI Ethics captures the most relevant developments in AI Ethics since October 2020. It aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the field's ever-changing developments. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, including: algorithmic injustice, discrimination, ethical AI, labor impacts, misinformation, privacy, risk and security, social media, and more. In addition, The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments. Unique to this report is "The Abuse and Misogynoir Playbook," written by Dr. Katlyn Tuner (Research Scientist, Space Enabled Research Group, MIT), Dr. Danielle Wood (Assistant Professor, Program in Media Arts and Sciences; Assistant Professor, Aeronautics and Astronautics; Lead, Space Enabled Research Group, MIT) and Dr. Catherine D'Ignazio (Assistant Professor, Urban Science and Planning; Director, Data + Feminism Lab, MIT). The piece (and accompanying infographic) is a deep dive into the historical and systematic silencing, erasure, and revision of Black women's contributions to knowledge and scholarship in the United States, and globally. Exposing and countering this Playbook has become increasingly important following the firing of AI Ethics expert Dr. Timnit Gebru (and several of her supporters) at Google. This report should be used not only as a point of reference and insight on the latest thinking in the field of AI Ethics, but also as a tool for introspection as we aim to foster a more nuanced conversation regarding the impacts of AI on the world.


AI system detects posts by foreign 'trolls' on Facebook and Twitter

The Guardian

Foreign manipulation campaigns on social media can be spotted by looking at features such as the timing and length of posts, and the URLs they contain, researchers have found. From the 2016 US presidential election to Brexit, a growing number of major political events are thought to have been targeted by foreign activity on social media platforms such as Facebook, Twitter and Reddit. Now researchers say they have developed an automated machine learning system – a type of artificial intelligence – that can spot such posts, based on their content. "We can use machine learning to automatically identify the content of troll postings and track an online information operation without human intervention," said Dr Meysam Alizadeh of Princeton University, co-author of the research. The team say the approach differs from simply detecting bots, which they say is important since such campaigns often include posts by humans.


Anomaly Detection with Joint Representation Learning of Content and Connection

Wang, Junhao, Wang, Renhao, Kulshrestha, Aayushi, Rabbany, Reihaneh

arXiv.org Machine Learning

Social media sites are becoming a key factor in politics. These platforms are easy to manipulate for the purpose of distorting information space to confuse and distract voters. Past works to identify disruptive patterns are mostly focused on analyzing the content of tweets. In this study, we jointly embed the information from both user posted content as well as a user's follower network, to detect groups of densely connected users in an unsupervised fashion. We then investigate these dense sub-blocks of users to flag anomalous behavior. In our experiments, we study the tweets related to the upcoming 2019 Canadian Elections, and observe a set of densely-connected users engaging in local politics in different provinces, and exhibiting troll-like behavior.
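The joint-representation idea above can be sketched in a few lines: each user gets a content vector (e.g. averaged tweet embeddings) and a network vector (e.g. a follower-graph embedding), and concatenating them yields one joint vector per user on which densely connected groups can be sought. All vectors below are hypothetical toy values, not the paper's learned embeddings.

```python
import numpy as np

# Toy per-user features: rows are users, columns are embedding dimensions.
content = np.array([[1.0, 0.0],    # user 0: content embedding
                    [0.9, 0.1],    # user 1: very similar content to user 0
                    [0.0, 1.0]])   # user 2: different content
network = np.array([[0.0, 1.0],    # user 0: follower-network embedding
                    [0.1, 0.9],    # user 1: similar network position
                    [1.0, 0.0]])   # user 2: different network position

# Joint representation: concatenate content and network views per user.
joint = np.concatenate([content, network], axis=1)   # shape (3, 4)

# Cosine similarity on the joint vectors; users 0 and 1 form a dense pair
# that an unsupervised grouping step could surface as a candidate
# coordinated sub-block worth inspecting for troll-like behavior.
unit = joint / np.linalg.norm(joint, axis=1, keepdims=True)
sim = unit @ unit.T
print(joint.shape, sim[0, 1] > 0.95, sim[0, 2] < 0.5)
```

Using both views matters: accounts that coordinate can look ordinary in content alone or in connections alone, but the joint vectors make agreement across both signals visible at once.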