
Extremism


Unifying the Extremes: Developing a Unified Model for Detecting and Predicting Extremist Traits and Radicalization

Lahnala, Allison, Varadarajan, Vasudha, Flek, Lucie, Schwartz, H. Andrew, Boyd, Ryan L.

arXiv.org Artificial Intelligence

The proliferation of ideological movements into extremist factions via social media has become a global concern. While radicalization has been studied extensively within the context of specific ideologies, our ability to accurately characterize extremism in more generalizable terms remains underdeveloped. In this paper, we propose a novel method for extracting and analyzing extremist discourse across a range of online community forums. By focusing on verbal behavioral signatures of extremist traits, we develop a framework for quantifying extremism at both user and community levels. Our research identifies 11 distinct factors, which we term "The Extremist Eleven," as a generalized psychosocial model of extremism. Applying our method to various online communities, we demonstrate an ability to characterize ideologically diverse communities across the 11 extremist traits. We demonstrate the power of this method by analyzing user histories from members of the incel community. We find that our framework accurately predicts which users join the incel community up to 10 months before their actual entry with an AUC > 0.6, steadily increasing to an AUC of ~0.9 three to four months before the event. Further, we find that upon entry into an extremist forum, the users tend to maintain their level of extremism within the community, while still remaining distinguishable from the general online discourse. Our findings contribute to the study of extremism by introducing a more holistic, cross-ideological approach that transcends traditional, trait-specific models.
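The entry-prediction result above rests on scoring users' pre-entry posts and evaluating discrimination with AUC. A minimal sketch of that evaluation step, using made-up scores and labels (the 11-factor model itself is not reproduced here):

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    scores: predicted extremism scores (higher = more likely to join)
    labels: 1 if the user later joined the community, else 0
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need both positive and negative examples")
    # Count pairs where a future member outranks a non-member (ties count half).
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores for users observed a few months before potential entry:
scores = [0.9, 0.8, 0.75, 0.4, 0.3, 0.2]
labels = [1,   1,   0,    1,   0,   0]
result = auc(scores, labels)  # 8 of 9 pos/neg pairs correctly ordered -> ~0.889
```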


Want AI that flags hateful content? Build it.

MIT Technology Review

The challenge asks for two different models. The first, a task for those with intermediate skills, is one that identifies hateful images; the second, considered an advanced challenge, is a model that attempts to fool the first one. "That actually mimics how it works in the real world," says Chowdhury. "The do-gooders make one approach, and then the bad guys make an approach." The goal is to engage machine-learning researchers on the topic of mitigating extremism, which may lead to the creation of new models that can effectively screen for hateful images.


Assessing Large Language Models for Online Extremism Research: Identification, Explanation, and New Knowledge

Dong, Beidi, Lee, Jin R., Zhu, Ziwei, Srinivasan, Balassubramanian

arXiv.org Artificial Intelligence

The United States has experienced a significant increase in violent extremism, prompting the need for automated tools to detect and limit the spread of extremist ideology online. This study evaluates the performance of Bidirectional Encoder Representations from Transformers (BERT) and Generative Pre-Trained Transformers (GPT) in detecting and classifying online domestic extremist posts. We collected social media posts containing "far-right" and "far-left" ideological keywords and manually labeled them as extremist or non-extremist. Extremist posts were further classified into one or more of five contributing elements of extremism based on a working definitional framework. The BERT model's performance was evaluated based on training data size and knowledge transfer between categories. We also compared the performance of GPT 3.5 and GPT 4 models using different prompts: naïve, layperson-definition, role-playing, and professional-definition. Results showed that the best performing GPT models outperformed the best performing BERT models, with more detailed prompts generally yielding better results. However, overly complex prompts may impair performance. Different versions of GPT have unique sensitivities to what they consider extremist. GPT 3.5 performed better at classifying far-left extremist posts, while GPT 4 performed better at classifying far-right extremist posts. Large language models, represented by GPT models, hold significant potential for online extremism classification tasks, surpassing traditional BERT models in a zero-shot setting. Future research should explore human-computer interactions in optimizing GPT models for extremist detection and classification tasks to develop more efficient (e.g., quicker, less effort) and effective (e.g., fewer errors or mistakes) methods for identifying extremist content.
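The four prompt conditions the study compares (naïve, layperson-definition, role-playing, professional-definition) can be sketched as templates. The wording below is illustrative, not the authors' exact prompts, and the templates would be sent to a GPT API in a real pipeline:

```python
# Illustrative prompt templates for zero-shot extremism classification.
# The actual study prompts are not reproduced here.
PROMPTS = {
    "naive": "Is the following post extremist? Answer yes or no.\n\n{post}",
    "layperson": (
        "Extremism means advocating violence or hatred to advance an "
        "ideology. Is the following post extremist? Answer yes or no.\n\n{post}"
    ),
    "role_playing": (
        "You are a content moderator reviewing social media posts. "
        "Is the following post extremist? Answer yes or no.\n\n{post}"
    ),
    "professional": (
        "Using a working definitional framework in which extremism involves "
        "dehumanization, conspiracy beliefs, or calls to violence against an "
        "outgroup, classify the following post as extremist or "
        "non-extremist.\n\n{post}"
    ),
}

def build_prompt(style: str, post: str) -> str:
    """Fill the chosen template; a real pipeline would send this to the model."""
    return PROMPTS[style].format(post=post)
```

The study's finding that detail helps up to a point would correspond to sweeping `style` across these conditions and comparing classification metrics per prompt.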


A Lexicon for Studying Radicalization in Incel Communities

Klein, Emily, Golbeck, Jennifer

arXiv.org Artificial Intelligence

Incels are an extremist online community of men who believe in an ideology rooted in misogyny, racism, the glorification of violence, and dehumanization. In their online forums, they use an extensive, evolving cryptolect - a set of ingroup terms that have meaning within the group, reflect the ideology, demonstrate membership in the community, and are difficult for outsiders to understand. This paper presents a lexicon with terms and definitions for common incel root words, prefixes, and affixes. The lexicon is text-based for use in automated analysis and is derived via a Qualitative Content Analysis of the most frequent incel words, their structure, and their meaning on five of the most active incel communities from 2016 to 2023.
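A lexicon organized around roots and affixes lends itself to simple morphological matching in automated analysis. A hedged sketch, using a tiny illustrative affix list (e.g. the widely documented "-cel" and "-pill" suffixes) rather than the paper's full lexicon:

```python
import re

# Illustrative subset; the published lexicon is far larger and pairs each
# root, prefix, and suffix with a definition.
SUFFIXES = ["maxxing", "maxx", "pill", "cel"]

# Require at least two word characters before the suffix so common standalone
# words like "pill" are not flagged. Longest suffixes are listed first so the
# alternation prefers them.
AFFIX_RE = re.compile(
    r"\b\w{2,}(?:" + "|".join(SUFFIXES) + r")\b", re.IGNORECASE
)

def find_cryptolect_candidates(text: str) -> list:
    """Return tokens whose structure matches a known in-group suffix."""
    return AFFIX_RE.findall(text)

hits = find_cryptolect_candidates("a discussion of the blackpill worldview")
# hits == ["blackpill"]
```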


'Capitalism is dead. Now we have something much worse': Yanis Varoufakis on extremism, Starmer, and the tyranny of big tech

The Guardian

What could be more delightful than a trip to Greece to meet Yanis Varoufakis, the charismatic leftwing firebrand who tried to stick it to the man, AKA the IMF, EU and entire global financial order? The mental imagery I have before the visit is roughly two parts Zorba the Greek to one part an episode of BBC series Holiday from the Jill Dando era: blue skies, blue sea, maybe some plate breaking in a jolly taverna. What I'm not expecting is a wall of flames rippling across a hillside next to the highway from the airport and a plume of black smoke billowing across the carriageway. Because even a modernist villa on a hillside on the island of Aegina – a fast ferry ride from the port of Piraeus and the summer bolthole of chic Athenians – is not the sanctuary from the modern world that it might once have been. The house is where Varoufakis and his wife, landscape artist Danae Stratou, live, year round since the pandemic, but in August 2023 at the end of a summer of heatwaves and extreme weather conditions across the world, it feels more than a little apocalyptic. The sun is a dim orange orb struggling to shine through a haze of smoke while a shower of fine ash falls invisibly from the sky.


China's Great Firewall Came for AI Chatbots, and Experts Are Worried

#artificialintelligence

China's top digital regulator proposed bold new guidelines this week that prohibit ChatGPT-style large language models from spitting out content believed to subvert state power or advocate for the overthrow of the country's communist political system. Experts speaking with Gizmodo said the new guidelines mark the clearest signs yet of Chinese authorities' eagerness to extend its hardline online censorship apparatus to the emerging world of generative artificial intelligence. "We should be under no illusions. The Party will wield the new Generative AI Guidelines to carry out the same function of censorship, surveillance, and information manipulation it has sought to justify under other laws and regulations," Michael Caster, Asia Digital Programme Manager for Article 19, a human rights organization focused on online free expression, told Gizmodo. The draft guidelines, published by the Cyberspace Administration of China, come hot on the heels of new generative AI products from Baidu, Alibaba, and other Chinese tech giants.


Agent mental models and Bayesian rules as a tool to create opinion dynamics models

Martins, Andre C. R.

arXiv.org Artificial Intelligence

Traditional models of opinion dynamics provide a simple approach to understanding human behavior in basic social scenarios. However, when it comes to issues such as polarization and extremism, we require a more nuanced understanding of human biases and cognitive tendencies. In this paper, we propose an approach to modeling opinion dynamics by integrating mental models and assumptions of individual agents using Bayesian-inspired methods. By exploring the relationship between human rationality and Bayesian theory, we demonstrate the efficacy of these methods in describing how opinions evolve. Our analysis leverages the Continuous Opinions and Discrete Actions (CODA) model, applying Bayesian-inspired rules to account for key human behaviors such as confirmation bias, motivated reasoning, and our reluctance to change opinions. Through this, we obtain update rules that offer deeper insights into the dynamics of extreme opinions. Our work sheds light on the role of human biases in shaping opinion dynamics and highlights the potential of Bayesian-inspired modeling to provide more accurate predictions of real-world scenarios. Keywords: Opinion dynamics, Bayesian methods, Cognition, CODA, Agent-based models
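In the standard CODA model, an agent holds a probability p that option A is better but observes only neighbors' discrete choices. With likelihood α = P(neighbor chooses A | A is better) (and, in the symmetric case, the same for B), Bayes' rule gives the update below; in log-odds it is a fixed additive step, which is what drives repeated reinforcement toward extreme opinions. A minimal sketch with an illustrative α:

```python
import math

def coda_update(p, neighbor_chose_a, alpha=0.7):
    """One Bayesian update of P(A is better) after observing a neighbor's choice.

    alpha: probability a neighbor picks A given that A really is better
           (symmetric case: same probability of picking B given B is better).
    """
    if neighbor_chose_a:
        return p * alpha / (p * alpha + (1 - p) * (1 - alpha))
    return p * (1 - alpha) / (p * (1 - alpha) + (1 - p) * alpha)

def log_odds(p):
    return math.log(p / (1 - p))

p0 = 0.5
p1 = coda_update(p0, True)   # 0.5*0.7 / (0.5*0.7 + 0.5*0.3) = 0.7
p2 = coda_update(p1, True)
step1 = log_odds(p1) - log_odds(p0)
step2 = log_odds(p2) - log_odds(p1)
# Each agreeing observation adds the same log-odds increment ln(alpha/(1-alpha)),
# so a run of agreeing neighbors pushes p toward 0 or 1 -- an extreme opinion.
```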


Down the Rabbit Hole: Detecting Online Extremism, Radicalisation, and Politicised Hate Speech

Govers, Jarod, Feldman, Philip, Dant, Aaron, Patros, Panos

arXiv.org Artificial Intelligence

Social media is a modern person's digital voice to project and engage with new ideas and mobilise communities – a power shared with extremists. Given the societal risks of unvetted content-moderating algorithms for Extremism, Radicalisation, and Hate speech (ERH) detection, responsible software engineering must understand the who, what, when, where, and why such models are necessary to protect user safety and free expression. Hence, we propose and examine the unique research field of ERH context mining to unify disjoint studies. Specifically, we evaluate the start-to-finish design process from socio-technical definition-building and dataset collection strategies to technical algorithm design and performance. Our 2015-2021 51-study Systematic Literature Review (SLR) provides the first cross-examination of textual, network, and visual approaches to detecting extremist affiliation, hateful content, and radicalisation towards groups and movements. We identify consensus-driven ERH definitions and propose solutions to existing ideological and geographic biases, particularly due to the lack of research in Oceania/Australasia. Our hybridised investigation on Natural Language Processing, Community Detection, and visual-text models demonstrates the dominating performance of textual transformer-based algorithms. We conclude with vital recommendations for ERH context mining researchers and propose an uptake roadmap with guidelines for researchers, industries, and governments to enable a safer cyberspace.


Extremism in the Metaverse

#artificialintelligence

With the rapid growth of Web 3.0 – defined as a decentralised form of the Internet where people have complete control over their own data, more transparency, and far more content accessible to users – human communication will become far easier. Technologies that will facilitate this change include the combination of machine learning, artificial intelligence, and blockchain that will be central pillars of the third version of the Internet. The metaverse, therefore, is a product of Web 3.0, incorporating emerging technologies including augmented reality and allowing users to spend far more time in a virtual world where they can live, work, play, and worship. EMAN has been increasing its focus on the role of extremism in tech and how hate speech and online extremism will evolve as the Internet undergoes significant changes as we know it today. Even as these metaverses act as efficient socialising tools, global companies are already adapting themselves to this concept by developing new business visions that simulate this new Internet innovation.


Has Artificial Intelligence begun killing humans already?

#artificialintelligence

It would seem that the much-feared death by AI prophesied in dystopian cinema has already begun, but there is more than meets the eye, finds Satyen K. Bordoloi. In the Terminator film series, the aim of the Artificial Intelligence machines is to exterminate humans. In the Matrix world, they use humans as batteries. In films like 2001: A Space Odyssey (1968) or Ex Machina, it is individuals who are at risk. As you can guess, harmful AI has been a favorite film trope even before the real advent of AI, which began only after the 2010s. Since then, though, AI watchers would be keenly aware that yes, those movies are exaggerations, but that does not mean the dystopian vision they foresaw isn't coming true – slowly and stealthily. In November 2017, 14-year-old Molly Russell ended her life.