

The Coming AI Hackers

#artificialintelligence

Artificial intelligence--AI--is an information technology. And it is already deeply embedded into our social fabric, both in ways we understand and in ways we don't. It will hack our society to a degree and effect unlike anything that's come before. I mean this in two very different ways. One, AI systems will be used to hack us. And two, AI systems will themselves become hackers: finding vulnerabilities in all sorts of social, economic, and political systems, and then exploiting them at an unprecedented speed, scale, and scope. We risk a future of AI systems hacking other AI systems, with humans being little more than collateral damage.

Okay, maybe it's a bit of hyperbole, but none of this requires far-future science-fiction technology. I'm not postulating any "singularity," where the AI-learning feedback loop becomes so fast that it outstrips human understanding. My scenarios don't require evil intent on the part of anyone. We don't need malicious AI systems like Skynet (Terminator) or the Agents (Matrix). Some of the hacks I will discuss don't even require major research breakthroughs. They'll improve as AI techniques get more sophisticated, but we can see hints of them in operation today. This hacking will come naturally, as AIs become more advanced at learning, understanding, and problem-solving.

In this essay, I will talk about the implications of AI hackers. First, I will generalize "hacking" to include economic, social, and political systems--and also our brains. Next, I will describe how AI systems will be used to hack us. Then, I will explain how AIs will hack the economic, social, and political systems that comprise society. Finally, I will discuss the implications of a world of AI hackers, and point towards possible defenses. It's not all as bleak as it might sound.

Caper movies are filled with hacks. Hacks are clever, but not the same as innovations. Systems tend to be optimized for specific outcomes. Hacking is the pursuit of another outcome, often at the expense of the original optimization. Systems tend to be rigid. Systems limit what we can do, and invariably some of us want to do something else. Hacking is normally thought of as something you can do to computers. But hacks can be perpetrated on any system of rules--including the tax code. The tax code isn't computer software, but you can still think of it as "code" in the computer sense of the term. It's a series of algorithms that takes an input--financial information for the year--and produces an output: the amount of tax owed. It's deterministic, or at least it's supposed to be.
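To make the analogy concrete, here is a minimal sketch of a tax computation as deterministic "code." The brackets and rates are invented placeholders, not real tax law; the point is only that a system of rules maps inputs to outputs predictably.

```python
# Hedged illustration of the essay's analogy: the tax code as a deterministic
# function from financial inputs to tax owed. The brackets and rates below
# are invented placeholders, not real tax law.

def tax_owed(income: float) -> float:
    """Toy progressive tax: the same input always yields the same output."""
    brackets = [(10_000, 0.10), (40_000, 0.20), (float("inf"), 0.30)]
    owed = 0.0
    lower = 0.0
    for upper, rate in brackets:
        if income > lower:
            # Tax only the slice of income that falls inside this bracket.
            owed += (min(income, upper) - lower) * rate
        lower = upper
    return owed

print(tax_owed(50_000))  # deterministic: always 10000.0
```

A hack of such a system is an input, or combination of inputs, that the rule-writers never anticipated, yet that the rules process anyway.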


Machine Learning Featurizations for AI Hacking of Political Systems

Sanders, Nathan E, Schneier, Bruce

arXiv.org Artificial Intelligence

What would the inputs be to a machine whose output is the destabilization of a robust democracy, or whose emanations could disrupt the political power of nations? In the recent essay "The Coming AI Hackers," Schneier (2021) proposed a future application of artificial intelligences to discover, manipulate, and exploit vulnerabilities of social, economic, and political systems at speeds far greater than humans' ability to recognize and respond to such threats. This work advances the concept by applying to it theory from machine learning, hypothesizing some possible "featurization" (input specification and transformation) frameworks for AI hacking. Focusing on the political domain, we develop graph and sequence data representations that would enable the application of a range of deep learning models to predict attributes and outcomes of political systems. We explore possible data models, datasets, predictive tasks, and actionable applications associated with each framework. We speculate about the likely practical impact and feasibility of such models, and conclude by discussing their ethical implications.
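One possible shape of the graph featurization the abstract describes can be sketched in a few lines; the node and edge types here (legislators, bills, vote edges) are illustrative assumptions, not the paper's actual data model.

```python
# A minimal sketch of one possible graph "featurization" of a political
# system. Node and edge types (legislators, bills, vote edges) are
# illustrative assumptions rather than the paper's actual data model.
from dataclasses import dataclass, field

@dataclass
class PoliticalGraph:
    nodes: dict = field(default_factory=dict)   # node_id -> feature dict
    edges: list = field(default_factory=list)   # (src, dst, edge_type)

    def add_node(self, node_id: str, features: dict) -> None:
        self.nodes[node_id] = features

    def add_edge(self, src: str, dst: str, edge_type: str) -> None:
        self.edges.append((src, dst, edge_type))

g = PoliticalGraph()
g.add_node("legislator_1", {"party": "A", "tenure_years": 12})
g.add_node("bill_7", {"topic": "taxation"})
g.add_edge("legislator_1", "bill_7", "voted_yes")
```

A structure like this could, in principle, feed a graph neural network that predicts attributes and outcomes of political systems, which is the kind of predictive task the abstract proposes.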


We're Not Prepared for AI Hackers, Security Expert Warns

#artificialintelligence

Bruce Schneier knows we all have a lot to worry about these days, but the security researcher for the Harvard Kennedy School has one more thing that may keep you up at night: AI hackers. Schneier's eye-opening talk at the all-virtual RSAC 2021 conference examined the consequences, positive and negative, of artificial intelligence learning to hack all kinds of systems. The ongoing COVID-19 pandemic forced him and other RSAC participants to present via video this year, but that comfortable setting didn't blunt Schneier's concerns. "Any good AI system will naturally find hacks," said Schneier. AI systems find novel solutions because they lack human context, and the consequence is that some of those solutions will break the expectations humans have--hence, a hack. This is especially true for computers.



#RSAC: Bruce Schneier Warns of the Coming AI Hackers

#artificialintelligence

Artificial intelligence, commonly referred to as AI, represents both a risk and a benefit to the security of society, according to Bruce Schneier, security technologist, researcher, and lecturer at Harvard Kennedy School. Schneier made his remarks about the risks of AI in an afternoon keynote session at the 2021 RSA Conference on May 17. Hacking for Schneier isn't an action that is evil by definition; rather, it's about subverting a system or a set of rules in a way that is unanticipated or unwanted by a system's designers. "All systems of rules can be hacked," Schneier said. "Even the best-thought-out sets of rules will be incomplete or inconsistent, you'll have ambiguities and things that designers haven't thought of, and as long as there are people who want to subvert the goals in a system, there will be hacks." Schneier highlighted a key challenge with hacking that is conducted by some form of AI: it might be difficult to detect.


Cybersecurity in 2025: the skills we'll need to tackle threats of the future

#artificialintelligence

Two-thirds of large UK firms were targeted by cybercriminals in 2016. As the number of attacks continues to rise, what skills will the next generation of professionals need to protect us from AI hackers, rogue self-driving cars and financial ruin? The global cost of cybercrime is predicted to reach £4.9 trillion annually by 2021 and new cybersecurity trends are emerging. To fight future threats, society must develop the next generation of cyber skills. But how do businesses identify weaknesses in their cybersecurity before they're hacked?


Hacking and AI: Moral panic vs. real problems

#artificialintelligence

OK, they didn't literally run for any hills. But the EFF wrote a very panicked blog post warning of the dangers to come if an AI trained to hack wasn't parented properly. The histrionic post made a few headlines, but missed the point of the competition entirely. If the AI agents playing Def Con's all-machine Capture the Flag had feelings, they would've been very hurt indeed. The seven AI agents were projects of teams that hailed from around the world, coming together to compete for a $2 million purse.


DARPA is Giving $2 Million To The Person Who Creates An AI Hacker

#artificialintelligence

DARPA has been mostly focusing on making new things, pushing the boundary of what is possible to build. Case in point: this new challenge it issued to hackers. DARPA has just started the final round of the Cyber Grand Challenge, a competition among seven fully autonomous computers to defend themselves and point out flaws in a DARPA computer. The challenge aims to solve a persistent problem in computer systems: flaws in software often go unnoticed for around 312 days, a window that hackers can exploit.