How can attackers abuse artificial intelligence? - Help Net Security

#artificialintelligence

Artificial intelligence (AI) is rapidly finding applications in nearly every walk of life. Self-driving cars, social media networks, cybersecurity companies, and everything in between use it. But a new report published by the SHERPA consortium – an EU project studying the impact of AI on ethics and human rights – finds that while human attackers have access to machine learning techniques, they currently focus most of their efforts on manipulating existing AI systems for malicious purposes instead of creating new attacks that would use machine learning. The study's primary focus is on how malicious actors can abuse AI, machine learning, and smart information systems. The researchers identify a variety of potentially malicious uses for AI that are well within reach of today's attackers, including the creation of sophisticated disinformation and social engineering campaigns.
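One attack class such reports describe is evasion: nudging an input until an existing machine learning model misclassifies it. Below is a minimal, hypothetical Python sketch of that idea (not code from the SHERPA study) – a toy logistic-regression detector is trained on synthetic data, then a malicious sample is shifted along the model's own weight vector until it scores as benign.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic, illustrative data: benign samples cluster near 0, malicious near 1.
X = np.vstack([rng.normal(0.0, 0.3, (100, 5)), rng.normal(1.0, 0.3, (100, 5))])
y = np.array([0] * 100 + [1] * 100)  # 0 = benign, 1 = malicious
clf = LogisticRegression().fit(X, y)

x = rng.normal(1.0, 0.3, 5)           # a malicious sample the model catches
print(clf.predict([x]))               # -> [1] (flagged as malicious)

# Evasion: step against the weight vector w, which lowers the decision
# score w.x + b until the sample crosses the boundary.
w = clf.coef_[0]
x_adv = x - 3.0 * w / np.linalg.norm(w)
print(clf.predict([x_adv]))           # -> [0] (now scored benign)
```

In practice an attacker rarely sees the model's weights and must probe a black-box system instead, but the principle – small, directed input changes that flip a decision – is the same.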


AI may open dangerous new frontiers in geopolitics

#artificialintelligence

The dawn of true artificial intelligence will provoke an international security crisis, according to F-Secure chief research officer and security industry heavyweight Mikko Hypponen. Speaking to Computer Weekly in October 2019 during an event at the company's Helsinki headquarters, Hypponen said that although true AI is a long way off – in cyber security it is largely restricted to machine learning for threat modelling to assist human analysts – the potential danger is real, and should be considered today. "I believe the most likely way for superhuman intelligence to be generated will be through human brain simulators, which is really hard to do – it's going to take 20 to 30 years to get there," said Hypponen. "But if something like that, or some other mechanism of generating superhuman levels of intelligence, becomes a reality, it will absolutely become a catalyst for an international crisis. It will increase the likelihood of conflict."


Why AI will be inhuman

#artificialintelligence

According to F-Secure Vice President of Artificial Intelligence Matti Aksela, there's a common misconception that 'advanced' AI should mimic human intelligence – an assumption Project Blackfin aims to challenge. "People's expectation that 'advanced' machine intelligence simply mimics human intelligence is limiting our understanding of what AI can and should do. Instead of building AI to function as though it were human, we can and should be exploring ways to unlock the unique potential of machine intelligence, and how that can augment what people do," said Aksela, Head of F-Secure's Artificial Intelligence Center of Excellence. "We created Project Blackfin to help us reach that next level of understanding about what AI can achieve." Project Blackfin is a research initiative conceptualised by Aksela's cross-disciplinary team of artificial intelligence and cyber security researchers, mathematicians, data scientists, machine learning experts, and engineers.


Getting a grasp on AI and Machine Learning in cyber security

#artificialintelligence

If you follow technology at all, it's pretty hard to avoid hearing about "AI" and "machine learning." And it can be almost as difficult to understand what is actually being discussed when these words are used. "The term 'AI' is thrown around so readily, and I think for many people, it conjures up the image of 'artificial general intelligence,' or some form of self-thinking software," Andy Patel, Senior Researcher at F-Secure's Artificial Intelligence Center of Excellence, told me. "This, combined with massive hype and over-sensationalized or over-exaggerated headlines in the news, and claims from marketers, has caused a general lack of understanding of what machine learning really is right now, and what it is and isn't capable of." Andy attributes this common confusion about the use of data analysis for automated model building – known as "machine learning" – to a simple fact: most people get their information on the subject from the news.
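For readers trying to pin the term down, here is a minimal, hypothetical Python sketch of exactly that – "data analysis for automated model building". A toy phishing-message classifier is fitted from labelled examples; no detection rules are written by hand, which is what separates machine learning from ordinary programming. The messages and labels are invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training data: each message is labelled by a human,
# 1 = phishing, 0 = legitimate.
texts = [
    "urgent: verify your account password now",
    "meeting moved to 3pm, see updated agenda",
    "you have won a prize, click this link",
    "quarterly report attached for review",
]
labels = [1, 0, 1, 0]

# "Learning" here is just fitting word-frequency statistics to the data;
# the resulting model is built automatically, not hand-coded.
model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, labels)

print(model.predict(["click here to claim your prize"]))  # -> [1] (phishing)
```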


Self-driving cars to be targeted by hackers

Daily Mail - Science & tech

First it was our computers, then it was our phones, and now experts have warned hackers will soon be targeting our cars. Self-driving car technology is improving so quickly that some experts believe it will be mainstream within the next five years, meaning hacking will probably become a problem. A security expert has told MailOnline that cyber criminals may take control of a car and hold it to ransom to extort money from owners. 'There's no question whether autonomous cars can be hacked or not,' Mikko Hypponen, chief research officer of cyber security firm F-Secure, told MailOnline. The Insurance Information Institute estimates that by 2030, 25 per cent of all cars sold will be autonomous.