This is part of our Road Trip 2017 summer series "The Smartest Stuff," about how innovators are thinking up new ways to make you -- and the world around you -- smarter. A Las Vegas driver quizzes me after I tell him I'm headed to Defcon at Caesars Palace. All week, a cloud of paranoia looms over Las Vegas as hackers from around the world swarm Sin City for Black Hat and Defcon, two back-to-back cybersecurity conferences held in the last week of July. At Caesars Palace, where Defcon is celebrating its 25th anniversary, the UPS store posts a sign telling guests it won't accept printing requests from USB thumb drives. You can't be too careful with all those hackers in town.
Let's start by dispelling the most common misconception: there is very little, if any, true artificial intelligence (AI) incorporated into enterprise security software. That the term comes up so frequently has largely to do with marketing and very little to do with the technology. Pure AI is about reproducing cognitive abilities. That said, machine learning (ML), one of many subsets of artificial intelligence, is being baked into some security software -- though even the term machine learning may be employed somewhat optimistically.
Artificial intelligence is one of the most influential forces in information technology. It can help drive cars, fly unmanned aircraft and protect networks. But artificial intelligence also can be a dark force, one that adversaries use to learn new ways to hack systems, shut down networks and deny access to crucial information. The challenge is to prepare for a future where autonomous cyber attacks powered by artificial intelligence (AI) will threaten cyberspace and could endanger human life. This prospect is so significant that the Japanese Cabinet Secretariat tasked with developing the country's cybersecurity initiatives has created a research and development focus group to craft plans to counter cybersecurity threats, including those designed with AI.
Alan Turing is famous for several reasons, one of which is that he cracked the Nazis' seemingly unbreakable Enigma machine code during World War II. Later in life, Turing also devised what would become known as the Turing test for determining whether a computer was "intelligent" -- what we would now call artificial intelligence (AI). Turing believed that if a person couldn't tell the difference between a computer and a human in a conversation, then that computer was displaying AI. AI and information security have been intertwined practically since the birth of the modern computer in the mid-20th century. For today's enterprises, the relationship can generally be broken down into three categories: incident detection, incident response, and situational awareness -- i.e., helping a business understand its vulnerabilities before an incident occurs.
You may have seen the words 'artificial intelligence' and 'machine learning' widely used in the technology industry at the moment, and their appearances are no less prominent in cybersecurity. ABI Research predicts that machine learning in cybersecurity will help boost intelligence, analytics, and big data spending to US$96 billion by 2021. "We are in the midst of an artificial intelligence (AI) security revolution," says ABI Research analyst Dimitrios Pavlakis. "This will drive machine learning solutions to soon emerge as the new norm beyond security information and event management (SIEM) and ultimately displace a large portion of traditional AV, heuristics, and signature-based systems within the next five years." Beyond the numbers and the terminology, there is a simple question: What does machine learning do for cybersecurity, anyway?
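At its simplest, the answer is: model what "normal" activity looks like, then score new events against that baseline. The toy Python sketch below uses plain statistics rather than a real ML pipeline, and the traffic numbers are hypothetical, but it illustrates the same learn-a-baseline, flag-the-outlier pattern that underpins much of the machine learning in security products.

```python
from statistics import mean, stdev

def flag_anomalies(samples, threshold=2.5):
    """Flag values more than `threshold` standard deviations from the mean.

    A toy stand-in for the anomaly-detection role ML plays in security
    tooling: learn what 'normal' looks like, then score events against it.
    """
    mu, sigma = mean(samples), stdev(samples)
    return [x for x in samples if abs(x - mu) > threshold * sigma]

# Hypothetical login-attempt counts per minute; the spike is the outlier.
traffic = [12, 15, 11, 14, 13, 12, 16, 240, 14, 13]
print(flag_anomalies(traffic))  # → [240]
```

Real products replace the z-score with trained models (clustering, random forests, neural networks), but the division of labor is the same: the machine handles the baseline and the scoring, and humans investigate what gets flagged.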
The AI Times is a weekly newsletter covering the biggest AI, machine learning, big data, and automation news from around the globe. If you want to read the AI Times before anyone else, make sure to subscribe using the form at the bottom of this page. The five-year research partnership with the University of Toronto will allow LG to build on its Open Platform-Open Partnership-Open Connectivity strategy to expand the AI ecosystem. Forecasts show that AI will underpin many of the future advances -- and leaps -- in business productivity. Delve into how algorithms, data and new workflows will ensure your future AI success.
TechRepublic's Dan Patterson sat down with Caleb Barlow, IBM Security Vice President, to discuss how AI, IoT, and big data will shape the future of cybersecurity. The following is an edited transcript of the interview. Dan Patterson: What is the impact of the Internet of Things (IoT), the data that may or may not be wiped under the GDPR, and the emergence of artificial intelligence (AI) and machine learning? If we look at all of those trends combined, each one is a macro trend that will have its own lifecycle. But they are interwoven: IoT, machine learning, and security.
While the debate about artificial intelligence (AI) and augmented reality rages, virtual terrorists -- those who operate primarily on the Dark Web -- are getting smarter and thinking of new ways to benefit from both, creating methods to operate autonomously in this brave new world. Malware is being designed with adaptive, success-based learning to improve the accuracy and efficacy of cyberattacks. The coming generation of malware will be situation-aware, meaning that it will understand the environment it is in and make calculated decisions about what to do next, behaving like a human attacker: performing reconnaissance, identifying targets, choosing methods of attack, and intelligently evading detection. This next generation of malware uses code that is a precursor to AI, replacing traditional "if not this, then that" code logic with more complex decision-making trees. Autonomous malware operates much like branch prediction technology, designed to guess which branch of a decision tree a transaction will take before it is executed.
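The shift the passage describes -- from fixed "if not this, then that" branches to decision trees evaluated against the observed environment -- can be illustrated with a harmless Python sketch. The feature names, tree shape, and actions below are entirely hypothetical, chosen only to show the structural difference; this models the concept, not real malware.

```python
def hardcoded_policy(env):
    """Traditional if/else logic: a fixed, brittle branch."""
    if env["sandboxed"]:
        return "abort"
    return "proceed"

class Node:
    """One node of a decision tree over observed environment features."""
    def __init__(self, feature=None, branches=None, action=None):
        self.feature = feature    # feature tested at this node
        self.branches = branches  # dict: feature value -> child Node
        self.action = action      # leaf action when no feature is tested

    def decide(self, env):
        if self.action is not None:
            return self.action
        return self.branches[env[self.feature]].decide(env)

# A tiny hand-built tree; a "learned" tree would be fit from outcome data,
# which is where the adaptive, success-based behavior comes from.
tree = Node("sandboxed", {
    True: Node(action="abort"),
    False: Node("patched", {
        True: Node(action="recon"),
        False: Node(action="proceed"),
    }),
})

print(tree.decide({"sandboxed": False, "patched": True}))  # → recon
```

The structural point: the hard-coded policy can only ever take one of two paths, while the tree's behavior is data -- its nodes can be extended, reweighted, or relearned without rewriting the control flow, which is what makes tree-based decision logic a plausible stepping stone toward the autonomous behavior described above.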
When I walked around the exhibition floor at this week's massive Black Hat cybersecurity conference in Las Vegas, I was struck by the number of companies boasting about how they are using machine learning and artificial intelligence to help make the world a safer place. But some experts worry vendors aren't paying enough attention to the risks associated with relying heavily on these technologies. "What's happening is a little concerning, and in some cases even dangerous," warns Raffael Marty of security firm Forcepoint. The security industry's hunger for algorithms is understandable. It's facing a tsunami of cyberattacks just as the number of devices being hooked up to the internet is exploding.
New research from ESET reveals that three in four IT decision makers (75%) believe that AI and ML are a silver bullet for their cybersecurity challenges. In the past year, the amount of content published in marketing materials, media and social media about the role of AI in cybersecurity has grown enormously. In response to this growing hype, ESET surveyed 900 IT decision makers across the US, UK and Germany on their opinions of and attitudes toward AI and ML. The findings showed that US IT decision makers are the most likely to consider the technologies a panacea for their cybersecurity challenges -- 82%, compared with 67% in the UK and 66% in Germany. The majority of respondents said that AI and ML would help their organization detect and respond to threats faster (79%) and help solve a skills shortage (77%).