Artificial intelligence (AI) is proving to be one of the most influential and game-changing technological advancements in the business world. As more enterprises go digital, companies around the globe are constantly engineering new ways to build AI-based functions into practically every platform and software tool at their disposal. It should come as no surprise, then, that AI is affecting cybersecurity – in both positive and negative ways. Cybercrime is a massively lucrative business, and one of the greatest threats to every company in the world. Cybersecurity Ventures' Official 2019 Annual Cybercrime Report predicts cybercrime will cost the world $6 trillion annually by 2021 – up from $3 trillion in 2015.
When it comes to identifying existential threats posed by technological innovations, the popular imagination summons visions of Terminator, The Matrix, and I, Robot – dystopias ruled by robot overlords who exploit and exterminate people en masse. In these speculative futures, a combination of super-intelligence and evil intentions leads computers to annihilate or enslave the human race.
Twenty-six experts on the security implications of emerging technologies have jointly authored a ground-breaking report sounding the alarm about the potential malicious use of artificial intelligence (AI) by rogue states, criminals, and terrorists. Forecasting rapid growth in cybercrime and the misuse of drones during the next decade – as well as an unprecedented rise in the use of 'bots' to manipulate everything from elections to the news agenda and social media – the report is a clarion call for governments and corporations worldwide to address the clear and present danger inherent in the myriad applications of AI. The report – "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation" – also recommends interventions to mitigate the threats posed by the malicious use of AI. The co-authors come from a wide range of organizations and disciplines, including Oxford University's Future of Humanity Institute; Cambridge University's Centre for the Study of Existential Risk; OpenAI, a leading non-profit AI research company; the Electronic Frontier Foundation, an international non-profit digital rights group; the Center for a New American Security, a U.S.-based bipartisan national security think-tank; and other organizations. The 100-page report identifies three security domains – digital, physical, and political security – as particularly relevant to the malicious use of AI. It suggests that AI will disrupt the trade-off between scale and efficiency, enabling large-scale, finely targeted, and highly efficient attacks.
The report acknowledges that AI has many positive applications, but stresses that it is a dual-use technology, and that AI researchers and engineers should be proactive about the potential for its misuse. Policymakers and technical researchers need to work together now, the authors argue, to understand and prepare for the malicious use of AI.