
Offensive AI


SoK: On the Offensive Potential of AI

Schröer, Saskia Laura, Apruzzese, Giovanni, Human, Soheil, Laskov, Pavel, Anderson, Hyrum S., Bernroider, Edward W. N., Fass, Aurore, Nassi, Ben, Rimmer, Vera, Roli, Fabio, Salam, Samer, Shen, Ashley, Sunyaev, Ali, Wadhwa-Brown, Tim, Wagner, Isabel, Wang, Gang

arXiv.org Artificial Intelligence

Our society increasingly benefits from Artificial Intelligence (AI). Unfortunately, more and more evidence shows that AI is also used for offensive purposes. Prior works have revealed various examples of use cases in which the deployment of AI can lead to violations of security and privacy objectives. No extant work, however, has been able to draw a holistic picture of the offensive potential of AI. In this SoK paper we seek to lay the groundwork for a systematic analysis of the heterogeneous capabilities of offensive AI. In particular we (i) account for AI risks to both humans and systems while (ii) consolidating and distilling knowledge from academic literature, expert opinions, industrial venues, as well as laypeople -- all of which are valuable sources of information on offensive AI. To enable alignment of such diverse sources of knowledge, we devise a common set of criteria reflecting essential technological factors related to offensive AI. With the help of such criteria, we systematically analyze: 95 research papers; 38 InfoSec briefings (from, e.g., BlackHat); the responses of a user study (N=549) involving individuals with diverse backgrounds and expertise; and the opinions of 12 experts. Our contributions not only reveal concerning ways (some of which were overlooked by prior work) in which AI can be offensively used today, but also represent a foothold to address this threat in the years to come.


A Survey on Offensive AI Within Cybersecurity

Girhepuje, Sahil, Verma, Aviral, Raina, Gaurav

arXiv.org Artificial Intelligence

As AI takes on pivotal roles in essential applications, like self-driving vehicles, healthcare diagnosis, and financial services, it becomes a tempting target for malicious actors [16]. This study aims to comprehensively explore the realm of offensive AI, shedding light on its multifaceted dimensions, the techniques involved, its consequences, and potential future implications. Cyberattacks have surged in both complexity and frequency. This is evidenced by the escalating costs associated with data breaches. In 2022, businesses incurred an average loss of $4.35 million, an increase of $0.11 million from the previous year and a 12.7% rise from 2020 [22]. Moreover, the volume of data breaches has reached historic highs, with approximately 15 million records exposed during the third quarter of 2022. Furthermore, the third quarter of 2022 witnessed an alarming 57,116 distributed denial-of-service (DDoS) attacks [78]. Against this backdrop, understanding and mitigating security risks in machine learning (ML) has emerged as a pivotal aspect of cybersecurity.
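The breach-cost figures quoted above can be cross-checked with simple arithmetic; a minimal sketch, assuming the text's stated values (a $4.35M 2022 average, a $0.11M year-over-year increase, and a 12.7% rise since 2020):

```python
# Sanity-check the breach-cost figures quoted in the survey abstract.
cost_2022 = 4.35           # average loss in 2022, millions USD (as stated)
rise_from_2021 = 0.11      # stated increase over the previous year, millions USD
pct_rise_from_2020 = 12.7  # stated rise since 2020, percent

# Implied averages for the earlier years, derived from the stated figures.
cost_2021 = cost_2022 - rise_from_2021                  # 4.24
cost_2020 = cost_2022 / (1 + pct_rise_from_2020 / 100)  # ~3.86

print(f"implied 2021 average: ${cost_2021:.2f}M")
print(f"implied 2020 average: ${cost_2020:.2f}M")
```

The implied 2020 and 2021 averages (~$3.86M and $4.24M) are mutually consistent with the 2022 figure, so the three statistics in the passage agree with each other.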


Defensive vs. offensive AI: Why security teams are losing the AI war

#artificialintelligence

Weaponizing artificial intelligence (AI) to attack understaffed enterprises that lack AI and machine learning (ML) expertise is giving bad actors the edge in the ongoing AI cyberwar. Innovating at faster speeds than the most efficient enterprise, capable of recruiting talent to create new malware and test attack techniques, and using AI to alter attack strategies in real time, threat actors have a significant advantage over most enterprises. "AI is already being used by criminals to overcome some of the world's cybersecurity measures," warns Johan Gerber, executive vice president of security and cyber innovation at MasterCard. "But AI has to be part of our future, of how we attack and address cybersecurity."



How businesses can safeguard against rogue AI - Raconteur

#artificialintelligence

Three decades after a US university student named Robert Tappan Morris was convicted of launching the first widely known malware attack on the internet, cybercrime has become big business, costing the global economy an estimated £2.1m a minute. Internet service provider Beaming reports that cybercriminals are launching increasingly sophisticated attacks on an "unprecedented scale". The pandemic has exacerbated the situation: the sharp rise in remote working it prompted has enabled attackers to exploit vulnerabilities in domestic internet connections as a route into corporate systems. In 2020, the average UK business faced 686,961 attempts to breach its systems – 20% up on the previous year's figure – according to Beaming. That equates to an attack every 46 seconds.


Attackers use 'offensive AI' to create deepfakes for phishing campaigns

#artificialintelligence

AI enables organizations to automate tasks, extract information, and create media nearly indistinguishable from the real thing. In particular, cyberattackers can use AI to enhance their attacks and expand their campaigns. A recent survey published by researchers at Microsoft, Purdue, and Ben-Gurion University, among others, explores the threat of this "offensive AI" to organizations.


The Threat of Offensive AI to Organizations

Mirsky, Yisroel, Demontis, Ambra, Kotak, Jaidip, Shankar, Ram, Gelei, Deng, Yang, Liu, Zhang, Xiangyu, Lee, Wenke, Elovici, Yuval, Biggio, Battista

arXiv.org Artificial Intelligence

AI has provided us with the ability to automate tasks, extract information from vast amounts of data, and synthesize media that is nearly indistinguishable from the real thing. However, positive tools can also be used for negative purposes. In particular, cyber adversaries can use AI (such as machine learning) to enhance their attacks and expand their campaigns. Although offensive AI has been discussed in the past, there is a need to analyze and understand the threat in the context of organizations. For example, how does an AI-capable adversary impact the cyber kill chain? Does AI benefit the attacker more than the defender? What are the most significant AI threats facing organizations today and what will be their impact on the future? In this survey, we explore the threat of offensive AI on organizations. First, we present the background and discuss how AI changes the adversary's methods, strategies, goals, and overall attack model. Then, through a literature review, we identify 33 offensive AI capabilities which adversaries can use to enhance their attacks. Finally, through a user study spanning industry and academia, we rank the AI threats and provide insights on the adversaries.


Preparing for AI-enabled cyberattacks – MIT Technology Review

#artificialintelligence

Cyberattacks continue to grow in prevalence and sophistication. With the ability to disrupt business operations, wipe out critical data, and cause reputational damage, they pose an existential threat to businesses, critical services, and infrastructure. Today's new wave of attacks is outsmarting and outpacing humans, and even starting to incorporate artificial intelligence (AI). What's known as "offensive AI" will enable cybercriminals to direct targeted attacks at unprecedented speed and scale while flying under the radar of traditional, rule-based detection tools. Some of the world's largest and most trusted organizations have already fallen victim to damaging cyberattacks, undermining their ability to safeguard critical data.


The future of cybersecurity will be about 'fighting fire with fire'

#artificialintelligence

In many ways, cybersecurity has always been a contest; vendors race to develop security products that can identify and mitigate any threats, while cybercriminals aim to develop malware and exploits capable of bypassing protections. With the emergence of artificial intelligence (AI), however, this combative exchange between attackers and defenders is about to become more complex and increasingly ferocious. According to Max Heinemeyer, Director of Threat Hunting at AI security firm Darktrace, it is only a matter of time before AI is co-opted by malicious actors to automate attacks and expedite the discovery of vulnerabilities. "We don't know precisely when offensive AI will begin to emerge, but it could already be happening behind closed doors," he told TechRadar Pro. "If we are able to [build complex AI products] here in our labs with a few researchers, imagine what nation states that invest heavily in cyberwar could be capable of." When this trend starts to play out, as seems inevitable, Heinemeyer says cybersecurity will become a "battle of the algorithms", with AI pitted against AI.


Research Finds Supercharged AI Cyberattacks are Unavoidable

#artificialintelligence

New research from AI cybersecurity firm Darktrace revealed that most security leaders are preparing for AI-powered cyberattacks. According to the research paper titled "The Emergence of Offensive AI," conducted by Forrester Consulting on behalf of Darktrace, 88% of decision-makers in the security industry believe offensive AI is inevitable, with 50% of them expecting the industry to see these attacks in the coming years. The research also highlighted that 77% of respondents expect weaponized AI to lead to an increase in the scale of cyberattacks, while 66% felt it would lead to new kinds of attacks. Over 80% of security decision-makers opined that organizations require advanced cybersecurity defenses to combat offensive AI, and 75% of security leaders are concerned about business disruption. The findings are based on responses from security leaders across different industries, including the retail, financial services, and manufacturing sectors.