Artificial intelligence (AI) is proving to be one of the most influential and game-changing technological advancements in the business world. As enterprises go digital, companies around the globe are engineering new ways to build AI-based functions into practically every platform and software tool at their disposal. It should come as no surprise, then, that AI is affecting cybersecurity – in both positive and negative ways. Cybercrime is a massively lucrative business and one of the greatest threats to every company in the world. Cybersecurity Ventures' Official 2019 Annual Cybercrime Report predicts that cybercrime will cost the world $6 trillion annually by 2021 – up from $3 trillion in 2015.
When it comes to identifying existential threats posed by technological innovations, the popular imagination summons visions of Terminator, The Matrix, and I, Robot -- dystopias ruled by robot overlords who exploit and exterminate people en masse. In these speculative futures, a combination of super-intelligence and evil intentions leads computers to annihilate or enslave the human race.
Artificial intelligence systems can be attacked. The methods underpinning state-of-the-art AI systems are systematically vulnerable to a new type of cybersecurity attack: the "artificial intelligence attack." Using such an attack, adversaries can manipulate these systems to alter their behavior toward a malicious end goal. As AI systems are integrated into critical components of society, these attacks represent an emerging and systematic vulnerability with the potential to significantly affect national security. These "AI attacks" are fundamentally different from traditional cyberattacks: rather than exploiting "bugs" or human mistakes in code, they exploit inherent limitations in the underlying AI algorithms that currently cannot be fixed. Further, AI attacks fundamentally expand the set of entities that can be used to execute ...
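To make the idea concrete, here is a minimal sketch of one well-known class of such attacks: the adversarial example, in which a small, deliberately chosen perturbation to an input flips a model's prediction. The toy linear classifier, its weights, and the input below are invented for illustration only; the perturbation step follows the sign-of-the-gradient heuristic popularized by the fast gradient sign method (FGSM).

```python
import numpy as np

# Toy linear classifier: predict +1 if w.x > 0, else -1.
# Weights are arbitrary illustrative values.
w = np.array([2.0, -1.0, 0.5])

def predict(x):
    return 1 if w @ x > 0 else -1

# A benign input, correctly classified as +1 (w.x = 2.5).
x = np.array([1.0, 0.0, 1.0])
y = 1  # true label

# FGSM-style perturbation: step the input along the sign of the loss
# gradient. For a linear model, the gradient of the loss with respect
# to x points along -y * w, so its sign is -y * sign(w).
eps = 1.0
x_adv = x + eps * (-y * np.sign(w))

# The perturbed input crosses the decision boundary:
# w.x_adv = 2.5 - 1.0 * (2.0 + 1.0 + 0.5) = -1.0, so the prediction flips.
```

The point is not the specific numbers but the mechanism: the attack needs no bug in the code, only the model's own gradients (or an approximation of them), which is why such vulnerabilities cannot simply be "patched" like a conventional software flaw.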
Twenty-six experts on the security implications of emerging technologies have jointly authored a ground-breaking report sounding the alarm about the potential malicious use of artificial intelligence (AI) by rogue states, criminals, and terrorists. Forecasting rapid growth in cybercrime and the misuse of drones over the next decade – as well as an unprecedented rise in the use of 'bots' to manipulate everything from elections to the news agenda and social media – the report is a clarion call for governments and corporations worldwide to address the clear and present danger inherent in the myriad applications of AI. The report – "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation" – also recommends interventions to mitigate the threats posed by the malicious use of AI.

The co-authors come from a wide range of organizations and disciplines, including Oxford University's Future of Humanity Institute; Cambridge University's Centre for the Study of Existential Risk; OpenAI, a leading non-profit AI research company; the Electronic Frontier Foundation, an international non-profit digital rights group; the Center for a New American Security, a U.S.-based bipartisan national security think-tank; and other organizations.

The 100-page report identifies three security domains (digital, physical, and political security) as particularly relevant to the malicious use of AI. It suggests that AI will disrupt the trade-off between scale and efficiency and allow large-scale, finely-targeted, and highly-efficient attacks.