
Hackers Hate AI Slop Even More Than You Do

WIRED

Hackers and other cybercriminals are complaining about "AI shit" flooding platforms where they discuss cyberattacks and other illegal activity. "I'm disappointed that you are working to incorporate AI garbage into the site," one annoyed person, posting anonymously, said in an online message. "No-one is asking for this--we want you to improve the site, stop charging for new features." Only, this is not a regular internet user moaning about AI being forced into their favorite app. Instead, they are complaining about a cybercrime forum's plans to introduce more generative AI.


The Download: supercharged scams and studying AI healthcare

MIT Technology Review

Plus: DeepSeek has unveiled its long-awaited new AI model. When ChatGPT was released in late 2022, it showed how easily generative AI could create human-like text. This quickly caught the eye of cybercriminals, who began using LLMs to compose malicious emails. Since then, they've adopted AI for everything from turbocharged phishing and hyperrealistic deepfakes to automated vulnerability scans. Many organizations are now struggling to cope with the sheer volume of cyberattacks. AI is making them faster, cheaper, and easier to carry out, a problem set to worsen as more cybercriminals adopt these tools--and their capabilities improve.


'Help! I need money. It's an emergency': your child's voicemail that could be a scam

The Guardian

By taking a tiny snippet of real audio - just three seconds is enough - from a person, fraudsters can 'clone' the individual's voice using freely available AI tools. Steps to help combat fraud in which criminals use an AI-generated replica of a person's voice to deceive victims. The voicemail from your son is alarming. He has just been in a car accident and is highly stressed. He needs money urgently, although it is not clear why, and he gives you some bank details for a transfer.


Reimagining cybersecurity in the era of AI and quantum

MIT Technology Review

The threat landscape is being shaped by two seismic forces. To future-proof their organizations, security leaders must take a proactive stance with a zero trust approach. AI and quantum technologies are dramatically reconfiguring how cybersecurity functions, redefining the speed and scale with which digital defenders and their adversaries can operate. The weaponization of AI tools for cyberattacks is already proving a worthy opponent to current defenses. This includes using generative AI to create social engineering attacks at scale, churning out tens of thousands of tailored phishing emails in seconds, or accessing widely available voice cloning software capable of bypassing security defenses for as little as a few dollars. And now, agentic AI raises the stakes by introducing autonomous systems that can reason, act, and adapt like human adversaries.


DOGE Put Everyone's Social Security Data at Risk, Whistleblower Claims

WIRED

As students returned to school this week, WIRED spoke to a self-proclaimed leader of a violent online group known as "Purgatory" about a rash of swattings at universities across the US in recent days. The group claims to have ties to the loose cybercriminal network known as The Com, and the alleged Purgatory leader claimed responsibility for calling in hoax active-shooter alerts. Researchers from multiple organizations warned this week that cybercriminals are increasingly using generative AI tools to fuel ransomware attacks, including real situations where cybercriminals without technical expertise are using AI to develop the malware. And a popular, yet enigmatic, shortwave Russian radio station known as UVB-76 seems to have turned into a tool for Kremlin propaganda after decades of mystery and intrigue. Each week, we round up the security and privacy news we didn't cover in depth ourselves.


The Era of AI-Generated Ransomware Has Arrived

WIRED

As cybercrime surges around the world, new research increasingly shows that ransomware is evolving as a result of widely available generative AI tools. In some cases, attackers are using AI to draft more intimidating and coercive ransom notes and conduct more effective extortion attacks. But cybercriminals' use of generative AI is rapidly becoming more sophisticated. Researchers from the generative AI company Anthropic today revealed that attackers are leaning on generative AI more heavily--sometimes entirely--to develop actual malware and offer ransomware services to other cybercriminals. Ransomware criminals have recently been identified using Anthropic's large language model Claude and its coding-specific model, Claude Code, in the ransomware development process, according to the company's newly released threat intelligence report.


Preventing Jailbreak Prompts as Malicious Tools for Cybercriminals: A Cyber Defense Perspective

Tshimula, Jean Marie, Ndona, Xavier, Nkashama, D'Jeff K., Tardif, Pierre-Martin, Kabanza, Froduald, Frappier, Marc, Wang, Shengrui

arXiv.org Artificial Intelligence

Jailbreak prompts pose a significant threat in AI and cybersecurity, as they are crafted to bypass ethical safeguards in large language models, potentially enabling misuse by cybercriminals. This paper analyzes jailbreak prompts from a cyber defense perspective, exploring techniques like prompt injection and context manipulation that allow harmful content generation, content filter evasion, and sensitive information extraction. We assess the impact of successful jailbreaks, from misinformation and automated social engineering to hazardous content creation, including bioweapons and explosives. To address these threats, we propose strategies involving advanced prompt analysis, dynamic safety protocols, and continuous model fine-tuning to strengthen AI resilience. Additionally, we highlight the need for collaboration among AI researchers, cybersecurity experts, and policymakers to set standards for protecting AI systems. Through case studies, we illustrate these cyber defense approaches, promoting responsible AI practices to maintain system integrity and public trust. Warning: This paper contains content which the reader may find offensive.


The words and phrases you should NEVER Google or your computer could get hacked

Daily Mail - Science & tech

Searching on Google might seem like one of the safest things to do online. But cybersecurity experts warn that there are some searches which could put you at serious risk of being hacked. Last week, it was revealed that cybercriminals had hijacked the Google results for 'Are Bengal cats legal in Australia?' to infect cat-lovers' computers. Now, experts have revealed the seven other common words and phrases you should never Google. Using a technique called 'SEO poisoning', criminals exploit Google's search results to lure unsuspecting victims into websites they control.


Windows users are exposed to over 600 million cyber attacks every day

PCWorld

Microsoft recently released the Microsoft Digital Defense Report 2024, this year's edition of the company's annual cybersecurity report. In the 114-page document, Microsoft reveals -- among other things -- just how much cyber threats have grown over the past year. Cybercriminals have gained access to better resources, including the incorporation of AI tools to bolster their arsenal. They're now better equipped to create fake images, videos, and audio recordings to trick people, to flood job applications with AI-created "perfect" résumés to physically access companies, and much more. But hackers can also use your use of AI services to attack you.


What are digital arrests, the newest deepfake tool used by cybercriminals?

Al Jazeera

An Indian textile baron has revealed that he was duped out of 70 million rupees ($833,000) by online scammers impersonating federal investigators and even the Supreme Court chief justice. The fraudsters, posing as officers from India's Central Bureau of Investigation (CBI), called SP Oswal, chairman and managing director of the textile manufacturer Vardhman, on August 28 and accused him of money laundering. For the next two days, Oswal was under digital surveillance as he was ordered to keep Skype open on his phone 24/7, during which he was interrogated and threatened with arrest. The fraudsters also conducted a fake virtual court hearing with a digital impersonation of Chief Justice of India DY Chandrachud as the judge. Oswal paid the amount after the court verdict via Skype without realising that he was the latest victim of an online scam using a new modus operandi, called "digital arrest".