Radicalisation


Reading the post-riot posts: how we traced far-right radicalisation across 51,000 Facebook messages

The Guardian

More than 1,100 people have been charged in connection with the summer 2024 UK riots. A small number of them were charged for offences related to their online activity, and their jail sentences - which ranged from 12 weeks to seven years - became a flashpoint for online criticism.


Video games can't escape their role in the radicalisation of young men Keith Stuart

The Guardian

There is a lot of attention on young men and toxic masculinity at the moment. The devastating Netflix drama Adolescence, about a 13-year-old boy accused of murdering a girl after being radicalised by the online manosphere, has drawn attention to the problem through the sheer force of its brilliant writing and a blistering lead performance from teenager Owen Cooper. Recently, former England football manager Gareth Southgate gave a speech about the state of boyhood in the UK, specifically about how young men, lacking moral mentors, are turning to gambling and video gaming, thereby disconnecting from society and immersing themselves in predominantly male online communities where misogyny and racism are often rife. There has been some pushback in the gaming press against the idea that games have provided a less-than-ideal environment for boys, but even those of us who have played and enjoyed games all our lives need to face up to the fact that gaming forums, message boards, streaming platforms and social media groups are awash with disturbing hate speech and violent rhetoric. Honestly, we have known this for a while.


Statistical Analysis of Risk Assessment Factors and Metrics to Evaluate Radicalisation in Twitter

Lara-Cabrera, Raul, Gonzalez-Pardo, Antonio, Camacho, David

arXiv.org Artificial Intelligence

Nowadays, social networks have become an essential communication tool, producing a large amount of information about their users and their interactions that can be analysed with data-mining methods. In recent years, social networks have also been used to radicalise people. In this paper, we study the performance of a set of indicators, and their respective metrics, devoted to assessing the risk of radicalisation of a specific individual on three different datasets. Keyword-based metrics, although dependent on the written language, perform well when measuring frustration, perception of discrimination, and the declaration of negative and positive ideas about Western society and Jihadism, respectively. However, metrics based on frequent writing habits, such as the use of ellipses, are not sufficient to characterise a user at risk of radicalisation. The paper presents a detailed description of both the set of indicators used to assess radicalisation in social networks and the datasets used to evaluate them. Finally, an experimental study over these datasets is carried out to evaluate the performance of the metrics considered.
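The keyword-based metrics the abstract describes can be sketched in a few lines of Python. The indicator names and keyword lists below are illustrative placeholders, not the lexicons or datasets used in the paper:

```python
import re

# Minimal sketch of a keyword-based risk indicator. The indicators and
# their keyword lists are invented for illustration; the paper's actual
# lexicons are not reproduced here.

# Hypothetical indicators, each backed by a small keyword set.
INDICATORS = {
    "frustration": {"unfair", "betrayed", "angry"},
    "discrimination": {"excluded", "rejected", "outsider"},
}

def keyword_score(text, keywords):
    """Fraction of an indicator's keywords that appear in the text."""
    tokens = set(re.findall(r"[a-z']+", text.lower()))
    return sum(kw in tokens for kw in keywords) / len(keywords)

def risk_profile(text):
    """Score one text against every indicator."""
    return {name: keyword_score(text, kws) for name, kws in INDICATORS.items()}

profile = risk_profile("They betrayed us; it is so unfair to be treated as an outsider")
```

A real system would also need language-specific tokenisation and lexicons, which is the language dependence the abstract notes.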


Down the Rabbit Hole: Detecting Online Extremism, Radicalisation, and Politicised Hate Speech

Govers, Jarod, Feldman, Philip, Dant, Aaron, Patros, Panos

arXiv.org Artificial Intelligence

Social media is a modern person's digital voice to project and engage with new ideas and mobilise communities – a power shared with extremists. Given the societal risks of unvetted content-moderating algorithms for Extremism, Radicalisation, and Hate speech (ERH) detection, responsible software engineering must understand the who, what, when, where, and why such models are necessary to protect user safety and free expression. Hence, we propose and examine the unique research field of ERH context mining to unify disjoint studies. Specifically, we evaluate the start-to-finish design process from socio-technical definition-building and dataset collection strategies to technical algorithm design and performance. Our 2015-2021 51-study Systematic Literature Review (SLR) provides the first cross-examination of textual, network, and visual approaches to detecting extremist affiliation, hateful content, and radicalisation towards groups and movements. We identify consensus-driven ERH definitions and propose solutions to existing ideological and geographic biases, particularly due to the lack of research in Oceania/Australasia. Our hybridised investigation on Natural Language Processing, Community Detection, and visual-text models demonstrates the dominating performance of textual transformer-based algorithms. We conclude with vital recommendations for ERH context mining researchers and propose an uptake roadmap with guidelines for researchers, industries, and governments to enable a safer cyberspace.


An N Time-Slice Dynamic Chain Event Graph

Collazo, Rodrigo A., Smith, Jim Q.

arXiv.org Machine Learning

The Dynamic Chain Event Graph (DCEG) is able to depict many classes of discrete random processes exhibiting asymmetries in their developments and context-specific conditional probability structures. However, paradoxically, this very generality has so far frustrated its wide application. So in this paper we develop an object-oriented method to fully analyse a particularly useful and feasibly implementable new subclass of these graphical models called the N Time-Slice DCEG (NT-DCEG). After demonstrating a close relationship between an NT-DCEG and a specific class of Markov processes, we discuss how graphical modellers can exploit this connection to gain a deep understanding of their processes. We also show how to read from the topology of this graph context-specific independence statements that can then be checked by domain experts. Our methods are illustrated throughout using examples of dynamic multivariate processes describing inmate radicalisation in a prison.
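The context-specific independence statements the abstract mentions can be illustrated with a toy discrete process. In the sketch below, the next action depends on the current mood only in the context mood == "hostile"; in every other context the action's distribution is the same, so the action is conditionally independent of mood there. The variable names and probabilities are invented for illustration and are not taken from the paper's prison data:

```python
import random

# Toy sketch of a context-specific conditional probability structure of
# the kind an NT-DCEG encodes. All states and probabilities are invented.

def next_act(mood, rng):
    """Sample the next action; the dependence on mood is context-specific."""
    if mood == "hostile":  # the only context where mood matters
        return "protest" if rng.random() < 0.6 else "comply"
    # In every other context the distribution is identical, i.e. the
    # action is independent of mood given "not hostile".
    return "protest" if rng.random() < 0.1 else "comply"

def protest_frequency(mood, n, seed=0):
    """Empirical probability of 'protest' under a fixed random seed."""
    rng = random.Random(seed)
    return sum(next_act(mood, rng) == "protest" for _ in range(n)) / n
```

Reading such statements off the graph's topology, as the paper proposes, means a domain expert can check exactly these equalities of conditional distributions across contexts.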


From the future of bitcoin to Facebook, 2018 in technology

The Guardian

Both of the major smart home platforms have a long-running problem with "discoverability": it's very hard to let users know what their devices can do, particularly if they're always improving thanks to rapid software updates. Amazon and Google are constantly experimenting with ways to get around this, but so far they have been timid. Amazon sends a weekly email, while Google includes some tips in its app. Expect to see them be bolder, particularly as powerful rivals such as Apple appear on the scene with worse AI but better sound. So don't be surprised if your Google Home or Amazon Echo begins to talk back, rather than simply following commands.