cybercrime
AI Rewrites the Rules of Phishing, Cybercrime
It used to be just a sci-fi nightmare scenario, but today, AI phishing is real, and it's costing companies millions. We've already touched upon this one, but the Hong Kong phishing scam that targeted an employee at Arup deserves a deeper dive. The employee was tricked by deepfake versions of her CFO and colleagues into transferring HK$200 million (roughly US$25 million) across 15 transactions. The case has been widely reported and confirmed by the Hong Kong police. Every face and voice on the call was AI-generated.
BEACON: A Unified Behavioral-Tactical Framework for Explainable Cybercrime Analysis with Large Language Models
Sachdeva, Arush, Saravanan, Rajendraprasad, Sarkar, Gargi, Vemuri, Kavita, Shukla, Sandeep Kumar
Cybercrime has emerged as one of the most pervasive and economically destructive consequences of global digitalization. Contemporary online fraud and deception-based crimes now account for unprecedented financial losses worldwide, exceeding trillions of United States dollars (USD) annually (Morgan, 2016), while also inflicting severe psychological, social, and reputational harm on victims. Unlike classical cyberattacks targeting systems and networks, modern cybercrime increasingly exploits human vulnerabilities rather than purely technical weaknesses, relying on deception, persuasion, impersonation, emotional coercion, and trust manipulation as primary attack vectors (Holt, 2019; Yao, Zheng, Wu, Wu, Gao, Wang and Yang, 2025; Sarkar and Shukla, 2023; Sarkar, Singh, Kumar and Shukla, 2023). Existing cybersecurity frameworks, such as the Cyber Kill Chain and the MITRE ATT&CK framework, provide powerful abstractions for understanding technically sophisticated cyberattacks targeting enterprise systems and critical infrastructure (MITRE Corporation, 2025b,a). However, these models are fundamentally system-centric: they describe how adversaries compromise digital infrastructure, escalate privileges, and exfiltrate data. In contrast, cybercrime, particularly scams, fraud, impersonation, and extortion, primarily targets individual decision-making processes (Louderback and Antonaccio, 2017), often without exploiting any software vulnerability at all. Consequently, the investigative needs of cybercrime differ substantially from those of traditional cyberattacks.
Unintentional Consequences: Generative AI Use for Cybercrime
Luu, Truong Jack, Samuel, Binny M.
The democratization of generative AI introduces new forms of human-AI interaction and raises urgent safety, ethical, and cybersecurity concerns. We develop a socio-technical explanation for how generative AI enables and scales cybercrime. Drawing on affordance theory and technological amplification, we argue that generative AI systems create new action possibilities for cybercriminals and magnify pre-existing malicious intent by lowering expertise barriers and increasing attack efficiency. To illustrate this framework, we conduct interrupted time series analyses of two large datasets: (1) 464,190,074 malicious IP address reports from AbuseIPDB, and (2) 281,115 cryptocurrency scam reports from Chainabuse. Using November 30, 2022, as a high-salience public-access shock, we estimate the counterfactual trajectory of reported cyber abuse absent the release, providing an early-warning impact assessment of a general-purpose AI technology. Across both datasets, we observe statistically significant post-intervention increases in reported malicious activity, including an immediate increase of over 1.12 million weekly malicious IP reports and about 722 weekly cryptocurrency scam reports, with sustained growth in the latter. We discuss implications for AI governance, platform-level regulation, and cyber resilience, emphasizing the need for multi-layer socio-technical strategies that help key stakeholders maximize AI's benefits while mitigating its growing cybercrime risks.
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Law > Criminal Law (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.67)
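The interrupted time series design described in the abstract above can be sketched as a segmented regression: a level-shift term captures an immediate jump at the intervention week, and an interaction term captures a change in slope afterwards. This is a minimal illustration with invented toy data, not the authors' actual specification or datasets:

```python
import numpy as np

def interrupted_time_series(y, t0):
    """Fit y ~ b0 + b1*t + b2*post + b3*(t - t0)*post by least squares.

    b2 estimates the immediate level shift at intervention week t0;
    b3 estimates the change in trend after it. Illustrative only.
    """
    t = np.arange(len(y))
    post = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t, dtype=float), t, post, (t - t0) * post])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [intercept, pre-trend, level shift, slope change]

# Toy weekly report counts: gentle pre-trend, then a jump of 10 and a
# steeper drift after week 50 (standing in for the public-access shock).
rng = np.random.default_rng(0)
t = np.arange(100)
y = 100 + 0.2 * t + 10 * (t >= 50) + 0.5 * np.maximum(t - 50, 0) + rng.normal(0, 1, 100)
beta = interrupted_time_series(y, 50)
```

A significant positive `beta[2]` corresponds to the kind of immediate post-release increase the paper reports, and `beta[3]` to the sustained growth in the cryptocurrency scam series.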
I'm an FBI spy hunter. This is the biggest threat we face... and it could destroy us all
Robert Hanssen was the most damaging spy in American history. A senior FBI agent turned traitor, he sold classified secrets to Russia for more than two decades, compromising US intelligence at the highest levels. I was the undercover operative assigned to stop him. Working inside FBI headquarters, I became Hanssen's assistant in name, while secretly gathering the evidence that would lead to his arrest. That operation became the basis of my book Gray Day and the film Breach, in which Ryan Phillippe portrayed me. Since then, my path has evolved.
- North America > United States (0.70)
- Europe > Russia (0.25)
- Asia > Russia (0.25)
- Asia > China (0.05)
Two-step Automated Cybercrime Coded Word Detection using Multi-level Representation Learning
Kim, Yongyeon, On, Byung-Won, Lee, Ingyu
In social network service platforms, crime suspects are likely to use cybercrime coded words for communication by adding criminal meanings to existing words or replacing them with similar words. For instance, the word 'ice' is often used to mean methamphetamine in drug crimes. To analyze the nature of cybercrime and the behavior of criminals, quickly detecting such words and understanding their meaning are critical. In the automated cybercrime coded word detection problem, it is difficult to collect a sufficient amount of training data for supervised learning and to directly apply language models that utilize context information to better understand natural language. To overcome these limitations, we propose a new two-step approach, in which a mean latent vector is constructed for each cybercrime through one of five different AutoEncoder models in the first step, and cybercrime coded words are detected based on multi-level latent representations in the second step. Moreover, to deeply understand the cybercrime coded words detected through the two-step approach, we propose three novel methods: (1) detection of newly coined words, (2) detection of words that frequently appear in both drug and sex crimes, and (3) automatic generation of a word taxonomy. According to our experimental results, among the various AutoEncoder models, the stacked AutoEncoder shows the best performance. Additionally, the F1-score of the two-step approach is 0.991, higher than the 0.987 and 0.903 of the existing dark-GloVe and dark-BERT models. By analyzing the experimental results of the three proposed methods, we can gain a deeper understanding of drug and sex crimes.
- North America > United States > Washington > King County > Seattle (0.04)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- Law > Criminal Law (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.46)
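The two-step idea in the abstract above — summarize each crime category by a mean latent vector, then flag candidate words whose representations lie close to it — can be sketched as follows. The toy two-dimensional vectors stand in for AutoEncoder encodings, and the words, values, and threshold are all invented for illustration:

```python
import numpy as np

def mean_latent(vectors):
    """Step 1 (simplified): summarize a crime category by the mean of
    its known coded words' latent vectors."""
    return np.mean(vectors, axis=0)

def cosine(a, b):
    """Cosine similarity between two latent vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def detect_coded_words(candidates, centroid, threshold=0.9):
    """Step 2 (simplified): flag candidate words whose latent vector is
    close to the category's mean latent vector."""
    return [word for word, vec in candidates.items()
            if cosine(vec, centroid) >= threshold]

# Toy latent space: known drug-slang encodings cluster near [1, 0].
drug_latents = np.array([[0.9, 0.1], [1.0, 0.0], [0.95, 0.05]])
centroid = mean_latent(drug_latents)

candidates = {
    "ice": np.array([0.97, 0.03]),      # sits inside the drug cluster
    "weather": np.array([0.1, 0.9]),    # far from it
}
flagged = detect_coded_words(candidates, centroid)  # -> ["ice"]
```

The paper's actual pipeline additionally uses multi-level latent representations and compares five AutoEncoder variants; this sketch only shows the centroid-and-similarity core of the detection step.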
Staying One Step Ahead of Hackers When It Comes to AI
If you've been creeping around underground tech forums lately, you might have seen advertisements for a new program called WormGPT. The program is an AI-powered tool for cybercriminals to automate the creation of personalized phishing emails; although it sounds a bit like ChatGPT, WormGPT is not your friendly neighborhood AI. ChatGPT launched in November 2022 and, since then, generative AI has taken the world by storm. But few consider how its sudden rise will shape the future of cybersecurity. In 2024, generative AI is poised to facilitate new kinds of transnational--and translingual--cybercrime.
Cybercrime, AI supremacy and the metaverse: the tech stories that will dominate 2024
Partway through 2023, I caught up with a respected, high-ranking tech writer at another publication. We gossiped and nattered, and, a bit exasperated, empathised with each other: we were run ragged. The last two years have raised the stakes for what tech journalists do from serving a small niche community to covering stories that have an impact on the wider world. It's also down to the characters involved and what's at stake. Tech journalists have lived on fast-forward since Elon Musk first lodged his bid to take over Twitter – now X – in April 2022.
- Media > News (0.71)
- Information Technology > Security & Privacy (0.66)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.51)
The shadowy underbelly of AI
The proliferation of artificial intelligence (AI) in our daily lives has indisputably been a boon, remolding industries and redefining the paradigms of our routines. However, the rosy picture fades when one steps into the shadows and discerns the malignant uses AI is being tailored for. The emergence of AI tools such as WormGPT and FraudGPT, specifically designed for cybercrime, is a stark reminder of this reality. The odious advent of WormGPT, camouflaged in the guise of cutting-edge technology, has reverberated through the murky corridors of the cyber underworld.
- Asia > North Korea (0.05)
- Asia > China (0.05)
How AI Is Changing Cybersecurity: Pros and Cons
As a CTO with over a decade and a half of experience in the ever-changing field of cybersecurity, I've witnessed the enormous impact that artificial intelligence (AI) has had on the broad technological landscape. I have also seen AI-based solutions emerge as an important part of improving processes across a variety of fields and disciplines. The capacity of AI-based machine learning (ML) models to recognize patterns and make data-driven decisions and inferences represents a highly innovative strategy for rapidly identifying malware, directing incident response, and even anticipating potential security breaches. Below, I examine AI's role in cybersecurity, how it can be used to improve corporate and user security, and its limitations. Data is being generated at an exponential rate in the modern era of digitization, and an increasing amount of metadata is being saved or received online, either directly or indirectly. Furthermore, for data to reach its intended destination or be used for specific purposes, it frequently must be sent across a network or stored in a particular database or server.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (1.00)
Cybercrime: be careful what you tell your chatbot helper…
Concerns about the growing abilities of chatbots trained on large language models, such as OpenAI's GPT-4, Google's Bard and Microsoft's Bing Chat, are making headlines. Experts warn of their ability to spread misinformation on a monumental scale, as well as the existential risk their development may pose to humanity. As if this isn't worrying enough, a third area of concern has opened up – illustrated by Italy's recent ban of ChatGPT on privacy grounds. The Italian data regulator has voiced concerns over the model used by ChatGPT owner OpenAI and announced it would investigate whether the firm had broken strict European data protection laws. Chatbots can be useful for work and personal tasks, but they collect vast amounts of data.
- Europe > Italy (0.25)
- North America > United States (0.05)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.57)