Goto

Collaborating Authors

 cybercriminal


Reimagining cybersecurity in the era of AI and quantum

MIT Technology Review

The threat landscape is being shaped by two seismic forces. To future-proof their organizations, security leaders must take a proactive stance with a zero trust approach. AI and quantum technologies are dramatically reconfiguring how cybersecurity functions, redefining the speed and scale with which digital defenders and their adversaries can operate. The weaponization of AI tools for cyberattacks is already proving a worthy opponent to current defenses. This includes using generative AI to create social engineering attacks at scale, churning out tens of thousands of tailored phishing emails in seconds, or accessing widely available voice cloning software capable of bypassing security defenses for as little as a few dollars. And now, agentic AI raises the stakes by introducing autonomous systems that can reason, act, and adapt like human adversaries.


DOGE Put Everyone's Social Security Data at Risk, Whistleblower Claims

WIRED

As students returned to school this week, WIRED spoke to a self-proclaimed leader of a violent online group known as "Purgatory" about a rash of swattings at universities across the US in recent days. The group claims to have ties to the loose cybercriminal network known as The Com, and the alleged Purgatory leader claimed responsibility for calling in hoax active-shooter alerts. Researchers from multiple organizations warned this week that cybercriminals are increasingly using generative AI tools to fuel ransomware attacks, including real situations where cybercriminals without technical expertise are using AI to develop the malware. And a popular, yet enigmatic, shortwave Russian radio station known as UVB-76 seems to have turned into a tool for Kremlin propaganda after decades of mystery and intrigue. Each week, we round up the security and privacy news we didn't cover in depth ourselves.


The Era of AI-Generated Ransomware Has Arrived

WIRED

As cybercrime surges around the world, new research increasingly shows that ransomware is evolving as a result of widely available generative AI tools. In some cases, attackers are using AI to draft more intimidating and coercive ransom notes and conduct more effective extortion attacks. But cybercriminals' use of generative AI is rapidly becoming more sophisticated. Researchers from the generative AI company Anthropic today revealed that attackers are leaning on generative AI more heavily--sometimes entirely--to develop actual malware and offer ransomware services to other cybercriminals. Ransomware criminals have recently been identified using Anthropic's large language model Claude and its coding-specific model, Claude Code, in the ransomware development process, according to the company's newly released threat intelligence report.


Preventing Jailbreak Prompts as Malicious Tools for Cybercriminals: A Cyber Defense Perspective

Tshimula, Jean Marie, Ndona, Xavier, Nkashama, D'Jeff K., Tardif, Pierre-Martin, Kabanza, Froduald, Frappier, Marc, Wang, Shengrui

arXiv.org Artificial Intelligence

Jailbreak prompts pose a significant threat in AI and cybersecurity, as they are crafted to bypass ethical safeguards in large language models, potentially enabling misuse by cybercriminals. This paper analyzes jailbreak prompts from a cyber defense perspective, exploring techniques like prompt injection and context manipulation that allow harmful content generation, content filter evasion, and sensitive information extraction. We assess the impact of successful jailbreaks, from misinformation and automated social engineering to hazardous content creation, including bioweapons and explosives. To address these threats, we propose strategies involving advanced prompt analysis, dynamic safety protocols, and continuous model fine-tuning to strengthen AI resilience. Additionally, we highlight the need for collaboration among AI researchers, cybersecurity experts, and policymakers to set standards for protecting AI systems. Through case studies, we illustrate these cyber defense approaches, promoting responsible AI practices to maintain system integrity and public trust. \textbf{\color{red}Warning: This paper contains content which the reader may find offensive.}


The words and phrases you should NEVER Google or your computer could get hacked

Daily Mail - Science & tech

Searching on Google might seem like one of the safest things to do online. But cybersecurity experts warn that there are some searches which could put you at serious risk of being hacked. Last week, it was revealed that cybercriminals had hijacked the Google results for'Are Bengal cats legal in Australia?' to infect cat-lovers' computers. Now, experts have revealed the seven other common words and phrases you should never Google. Using a technique called'SEO poisoning' criminals exploit Google's search results to lure unsuspecting victims into websites they control.


Windows users are exposed to over 600 million cyber attacks every day

PCWorld

Microsoft recently released the Microsoft Digital Defense Report 2024, this year's edition of the company's annual cybersecurity report. In the 114-page document, Microsoft reveals -- among other things -- just how much cyber threats have grown over the past year. Cybercriminals have gained access to better resources, including the incorporation of AI tools to bolster their arsenal. They're now better equipped to create fake images, videos, and audio recordings to trick people, to flood job applications with AI-created "perfect" résumés to physically access companies, and much more. But hackers can also use your use of AI services to attack you.


What are digital arrests, the newest deepfake tool used by cybercriminals?

Al Jazeera

An Indian textile baron has revealed that he was duped out of 70 million rupees ( 833,000) by online scammers impersonating federal investigators and even the Supreme Court chief justice. The fraudsters posing as officers from India's Central Bureau of Investigation (CBI) called SP Oswal, chairman and managing director of the textile manufacturer Vardhman, on August 28 and accused him of money laundering. For the next two days, Oswal was under digital surveillance as he was ordered to keep Skype open on his phone 24/7 during which he was interrogated and threatened with arrest. The fraudsters also conducted a fake virtual court hearing with a digital impersonation of Chief Justice of India DY Chandrachud as the judge. Oswal paid the amount after the court verdict via Skype without realising that he was the latest victim of an online scam using a new modus operandi, called "digital arrest".


I'm Neuralink's patient zero - why I chose to get Elon Musk's brain chip even though it could be hacked

Daily Mail - Science & tech

A trip to a Pennsylvania lake turned into a tragedy for one man who was left paralyzed after running into the water for a swim. Noland Arbaugh, 29, recalls being hit on the side of the head by another person, leaving him unable to move his body from the shoulders down when he woke up face down in the lake. The 2016 accident led him on a journey to become Neuralink's patient zero this year, which saw him receive a brain implant that lets him control computers and other devices. 'I was a little worried it wouldn't work because [that could happen] with the first of anything, but I wanted to be the first to test all of that out,' he said in an interview on The Kim Komando Show. 'If anyone was going to go through it, to experience the downsides, I wanted to take that on as much as possible to help people after me.'


Psychological Profiling in Cybersecurity: A Look at LLMs and Psycholinguistic Features

Tshimula, Jean Marie, Nkashama, D'Jeff K., Muabila, Jean Tshibangu, Galekwa, René Manassé, Kanda, Hugues, Dialufuma, Maximilien V., Didier, Mbuyi Mukendi, Kalonji, Kalala, Mundele, Serge, Lenye, Patience Kinshie, Basele, Tighana Wenge, Ilunga, Aristarque, Mayemba, Christian N., Kasoro, Nathanaël M., Kasereka, Selain K., Mikese, Hardy, Tardif, Pierre-Martin, Frappier, Marc, Kabanza, Froduald, Chikhaoui, Belkacem, Wang, Shengrui, Sumbu, Ali Mulenda, Ndona, Xavier, Intudi, Raoul Kienge-Kienge

arXiv.org Artificial Intelligence

The increasing sophistication of cyber threats necessitates innovative approaches to cybersecurity. In this paper, we explore the potential of psychological profiling techniques, particularly focusing on the utilization of Large Language Models (LLMs) and psycholinguistic features. We investigate the intersection of psychology and cybersecurity, discussing how LLMs can be employed to analyze textual data for identifying psychological traits of threat actors. We explore the incorporation of psycholinguistic features, such as linguistic patterns and emotional cues, into cybersecurity frameworks. Our research underscores the importance of integrating psychological perspectives into cybersecurity practices to bolster defense mechanisms against evolving threats.


Tech expert warns 2024 will see 'explosion of AI-powered cybercrime'- and 27 US government agencies are currently using these systems in place of human

Daily Mail - Science & tech

A tech expert has warned that new advances in AI-powered technology will lead to an'explosion' in cybercrime in 2024. Shawn Henry, the chief security officer for CrowdStrike, recently shared how cybercriminals can use AI to sneak through individuals' cybersecurity defenses, spread misinformation, or infiltrate corporate networks. Cybercriminals can use AI to mislead people into believing false narratives during the election season and potentially giving up sensitive information, said the retired executive assistant director of the Federal Bureau of Investigation (FBI). The cybersecurity veteran's warning comes when AI has been given more jobs than ever, including in the US federal and state governments. Twenty-seven departments of the US federal government have deployed AI in some way, and many states have, too.