BEACON: A Unified Behavioral-Tactical Framework for Explainable Cybercrime Analysis with Large Language Models

Sachdeva, Arush, Saravanan, Rajendraprasad, Sarkar, Gargi, Vemuri, Kavita, Shukla, Sandeep Kumar

arXiv.org Artificial Intelligence

Cybercrime has emerged as one of the most pervasive and economically destructive consequences of global digitalization. Contemporary online fraud and deception-based crimes now account for unprecedented financial losses worldwide, exceeding trillions of United States dollars (USD) annually (Morgan, 2016), while also inflicting severe psychological, social, and reputational harm on victims. Unlike classical cyberattacks targeting systems and networks, modern cybercrime increasingly exploits human vulnerabilities rather than purely technical weaknesses, relying on deception, persuasion, impersonation, emotional coercion, and trust manipulation as primary attack vectors (Holt, 2019; Yao, Zheng, Wu, Wu, Gao, Wang and Yang, 2025; Sarkar and Shukla, 2023; Sarkar, Singh, Kumar and Shukla, 2023). Existing cybersecurity frameworks, such as the Cyber Kill Chain and the MITRE ATT&CK framework, provide powerful abstractions for understanding technically sophisticated cyberattacks targeting enterprise systems and critical infrastructure (MITRE Corporation, 2025b,a). However, these models are fundamentally system-centric: they describe how adversaries compromise digital infrastructure, escalate privileges, and exfiltrate data. In contrast, cybercrime, particularly scams, fraud, impersonation, and extortion, primarily targets individual decision-making processes (Louderback and Antonaccio, 2017), often without exploiting any software vulnerability at all. Consequently, the investigative needs of cybercrime differ substantially from those of traditional cyberattacks.


I'm an FBI spy hunter. This is the biggest threat we face... and it could destroy us all

Daily Mail - Science & tech

Robert Hanssen was the most damaging spy in American history. A senior FBI agent turned traitor, he sold classified secrets to Russia for more than two decades, compromising US intelligence at the highest levels. I was the undercover operative assigned to stop him. Working inside FBI headquarters, I became Hanssen's assistant in name, while secretly gathering the evidence that would lead to his arrest. That operation became the basis of my book Gray Day and the film Breach, in which Ryan Phillippe portrayed me. Since then, my path has evolved.


Two-step Automated Cybercrime Coded Word Detection using Multi-level Representation Learning

Kim, Yongyeon, On, Byung-Won, Lee, Ingyu

arXiv.org Artificial Intelligence

On social network service platforms, crime suspects are likely to use cybercrime coded words for communication by adding criminal meanings to existing words or replacing them with similar words. For instance, the word 'ice' is often used to mean methamphetamine in drug crimes. To analyze the nature of cybercrime and the behavior of criminals, quickly detecting such words and further understanding their meaning are critical. In the automated cybercrime coded word detection problem, it is difficult to collect a sufficient amount of training data for supervised learning and to directly apply language models that utilize context information to better understand natural language. To overcome these limitations, we propose a new two-step approach, in which a mean latent vector is constructed for each cybercrime category through one of five different AutoEncoder models in the first step, and cybercrime coded words are detected based on multi-level latent representations in the second step. Moreover, to deeply understand the cybercrime coded words detected through the two-step approach, we propose three novel methods: (1) detection of newly coined words, (2) detection of words that frequently appear in both drug and sex crimes, and (3) automatic generation of a word taxonomy. According to our experimental results, among the various AutoEncoder models, the stacked AutoEncoder model shows the best performance. Additionally, the F1-score of the two-step approach is 0.991, higher than the 0.987 and 0.903 achieved by the existing dark-GloVe and dark-BERT models, respectively. By analyzing the experimental results of the three proposed methods, we can gain a deeper understanding of drug and sex crimes.
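The two-step idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the paper derives latent representations from trained AutoEncoder models, whereas the toy vectors, threshold value, and cosine-similarity test below are assumptions chosen only to show the shape of the approach (step 1: a mean latent vector per crime category; step 2: flag candidate words whose latent vector lies close to that mean).

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two latent vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mean_latent(vectors):
    """Step 1: mean latent vector for one crime category."""
    return np.mean(vectors, axis=0)

def detect_coded_words(candidates, crime_mean, threshold=0.8):
    """Step 2: flag words whose latent vector is close to the
    crime-category mean (threshold is an illustrative choice)."""
    return [w for w, v in candidates.items()
            if cosine(v, crime_mean) >= threshold]

# Toy 3-d latent vectors standing in for AutoEncoder outputs
# of known drug-crime seed words.
drug_seed_vecs = np.array([[0.9, 0.1, 0.0],
                           [0.8, 0.2, 0.1]])
drug_mean = mean_latent(drug_seed_vecs)

candidates = {
    "ice":   np.array([0.85, 0.15, 0.05]),  # near the drug mean
    "table": np.array([0.05, 0.10, 0.90]),  # far from the drug mean
}
print(detect_coded_words(candidates, drug_mean))  # → ['ice']
```

In the paper's setting, the candidate vectors would come from the encoder of the best-performing (stacked) AutoEncoder, and detection is done over multi-level latent representations rather than a single similarity threshold.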


Staying One Step Ahead of Hackers When It Comes to AI

WIRED

If you've been creeping around underground tech forums lately, you might have seen advertisements for a new program called WormGPT. The program is an AI-powered tool for cybercriminals to automate the creation of personalized phishing emails; although it sounds a bit like ChatGPT, WormGPT is not your friendly neighborhood AI. ChatGPT launched in November 2022 and, since then, generative AI has taken the world by storm. But few consider how its sudden rise will shape the future of cybersecurity. In 2024, generative AI is poised to facilitate new kinds of transnational--and translingual--cybercrime.


Cybercrime, AI supremacy and the metaverse: the tech stories that will dominate 2024

The Guardian

Partway through 2023, I caught up with a respected, high-ranking tech writer at another publication. We gossiped and nattered, and, a bit exasperated, empathised with each other: we were run ragged. The last two years have raised the stakes for what tech journalists do from serving a small niche community to covering stories that have an impact on the wider world. It's also down to the characters involved and what's at stake. Tech journalists have lived on fast-forward since Elon Musk first lodged his bid to take over Twitter – now X – in April 2022.


The shadowy underbelly of AI

FOX News

Check to see if your name, number, or other personal data is online without you even knowing - plus how to remove it. The proliferation of artificial intelligence (AI) in our daily lives has indisputably been a boon, remolding industries and redefining the paradigms of our routines. However, the rosy picture fades when one steps into the shadows and discerns the malignant uses AI is being tailored for. The emergence of AI tools such as WormGPT and FraudGPT, specifically designed for cybercrime, is a stark reminder of this reality. The odious advent of WormGPT, camouflaged in the guise of cutting-edge technology, has reverberated through the murky corridors of the cyber underworld.


How AI Is Changing Cybersecurity--Pros and Cons - Eduaz

#artificialintelligence

As a CTO with over a decade and a half of experience in the ever-changing field of cybersecurity, I've witnessed the enormous impact that artificial intelligence (AI) has had on the broad technological landscape. In addition, I have seen how AI-based solutions have emerged as an important aspect of improving processes in a variety of fields and disciplines over the years. The capacity of AI-based machine learning (ML) models to recognize patterns and make data-driven decisions and inferences represents a highly innovative strategy for rapidly identifying malware, directing incident response, and even anticipating potential security breaches. In this article, I examine AI's role in cybersecurity, how it can be used to improve corporate and user security, and its limitations. Data is being generated at an exponential rate in the modern era of digitization, and an increasing amount of metadata is being saved or received online, either directly or indirectly. Furthermore, in order for data to reach its intended location or be used for specific purposes, it frequently must be sent across a network or stored in a specific database or server.


Cybercrime: be careful what you tell your chatbot helper…

The Guardian

Concerns about the growing abilities of chatbots trained on large language models, such as OpenAI's GPT-4, Google's Bard and Microsoft's Bing Chat, are making headlines. Experts warn of their ability to spread misinformation on a monumental scale, as well as the existential risk their development may pose to humanity. As if this isn't worrying enough, a third area of concern has opened up – illustrated by Italy's recent ban of ChatGPT on privacy grounds. The Italian data regulator has voiced concerns over the model used by ChatGPT owner OpenAI and announced it would investigate whether the firm had broken strict European data protection laws. Chatbots can be useful for work and personal tasks, but they collect vast amounts of data.


AI chatbots making it harder to spot phishing emails, say experts

#artificialintelligence

Chatbots are taking away a key line of defence against fraudulent phishing emails by removing glaring grammatical and spelling errors, according to experts. The warning comes as policing organisation Europol issues an international advisory about the potential criminal use of ChatGPT and other "large language models". Phishing emails are a well-known weapon of cybercriminals that fool recipients into clicking on a link that downloads malicious software or trick them into handing over personal details such as passwords or PINs. Half of all adults in England and Wales reported receiving a phishing email last year, according to the Office for National Statistics, while UK businesses have identified phishing attempts as the most common form of cyber-threat. However, a basic flaw in some phishing attempts – poor spelling and grammar – is being rectified by artificial intelligence (AI) chatbots, which can correct the errors that trip spam filters or alert human readers.


Council Post: How AI Is Disrupting And Transforming The Cybersecurity Landscape

#artificialintelligence

Hari Ravichandran is the CEO and Founder of Aura, a leading provider of comprehensive digital security solutions for consumers. One of the reasons for the rapid acceleration of cybercrime is the lower barrier to entry for malicious actors. Cybercriminals have evolved their business models, including now offering subscription services and starter kits. The use of large language models (LLMs) like ChatGPT to write malicious code also highlights the potential challenges to cybersecurity. Because of these threats, all business leaders in today's digital world must be knowledgeable about the developments of AI in cybersecurity.