Artificial intelligence (AI) is rapidly finding applications in nearly every walk of life. Self-driving cars, social media networks, cybersecurity companies, and everything in between use it. But a new report published by the SHERPA consortium – an EU project studying the impact of AI on ethics and human rights – finds that while human attackers have access to machine learning techniques, they currently focus most of their efforts on manipulating existing AI systems for malicious purposes instead of creating new attacks that would use machine learning. The study's primary focus is on how malicious actors can abuse AI, machine learning, and smart information systems. The researchers identify a variety of potentially malicious uses for AI that are well within reach of today's attackers, including the creation of sophisticated disinformation and social engineering campaigns.
If you follow technology at all, it's pretty hard to avoid hearing about "AI" and "machine learning." And it can be almost as difficult to understand what is actually being discussed when these words are used. "The term 'AI' is thrown around so readily, and I think for many people, it conjures up the image of 'artificial general intelligence,' or some form of self-thinking software," Andy Patel, Senior Researcher at F-Secure's Artificial Intelligence Center of Excellence, told me. "This, combined with massive hype and over-sensationalized or over-exaggerated headlines in the news, and claims from marketers, has caused a general lack of understanding of what machine learning really is right now, and what it is and isn't capable of." Andy attributes this common confusion about the use of data analysis for automated model building known as "machine learning" to a simple fact: most people get their information on the subject from the news.
First it was our computers, then it was our phones, and now experts have warned hackers will soon be targeting our cars. Self-driving car technology is improving so quickly that some experts believe it will be mainstream within the next five years, meaning hacking will probably become a problem. A security expert has told MailOnline that cyber criminals may take control of a car and hold it ransom to extort money from owners. 'There's no question whether autonomous cars can be hacked or not,' Mikko Hypponen, chief research officer of cyber security firm F-Secure, told MailOnline. The Insurance Information Institute estimates that by 2030, 25 per cent of all cars sold will be autonomous.
Three leading cyber security researchers sat down for a panel at the IP Expo in London yesterday to air their catastrophic predictions about the biggest infosec threats in the coming years. Nightmare scenarios regarding autonomous killer robots, terrorists taking control of fleets of autonomous vehicles, AI writing its own malware, and the threat posed by IoT devices all quickly followed. Rik Ferguson, global VP of security research at Trend Micro, started off by stating that an area "ripe for innovation in the security and criminal landscape" is artificial intelligence and machine learning. "One thing I find scary is the fact that we have a petition to the UN from 120 leading academics to outlaw autonomous weaponry," he said. "We are already in Skynet, that is the world we live in."