AI Fight Club Could Help Save Us from a Future of Super-Smart Cyberattacks

MIT Technology Review

A new competition heralds what is likely to become the future of cybersecurity and cyberwarfare, with offensive and defensive AI algorithms doing battle. The contest, which will play out over the next five months, is run by Kaggle, a platform for data science competitions. It will pit researchers' algorithms against one another in attempts to confuse and trick each other, the hope being that this combat will yield insights into how to harden machine-learning systems against future attacks. "It's a brilliant idea to catalyze research into both fooling deep neural networks and designing deep neural networks that cannot be fooled," says Jeff Clune, an assistant professor at the University of Wyoming who studies the limits of machine learning. The contest will have three components.
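
To make the idea of "fooling" a neural network concrete, here is a minimal, self-contained sketch of a fast-gradient-sign-style attack on a toy numpy logistic-regression classifier. The model, data, and perturbation size are illustrative assumptions, not the contest's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classifier: logistic regression on two Gaussian blobs.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Train with plain gradient descent on the logistic loss.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted P(y = 1)
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

# Fast-gradient-sign-style attack: nudge an input in the direction
# that increases the loss for its true label.
def fgsm(x, label, eps):
    p = 1 / (1 + np.exp(-(x @ w + b)))
    grad_x = (p - label) * w             # d(loss)/dx for the logistic loss
    return x + eps * np.sign(grad_x)

x = X[0]                                    # a class-0 point
x_adv = fgsm(x, 0, eps=1.5)
print("clean score:      ", x @ w + b)      # negative => predicted class 0
print("adversarial score:", x_adv @ w + b)  # pushed towards class 1
```

The same gradient-sign idea, scaled up to deep networks, is what contest entries attack and defend against.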


Cracking the Code on Adversarial Machine Learning

#artificialintelligence

The vulnerabilities of machine learning models open the door to deceit, giving malicious operators the opportunity to interfere with the calculations or decision-making of machine learning systems. Scientists at the Army Research Laboratory who specialize in adversarial machine learning are working to strengthen defenses and advance this aspect of artificial intelligence. Corrupted inputs and adversarial attacks often enter a machine learning model through its data set undetected. Adversaries can also affect a model without knowing the machine learning algorithm it uses, by training a substitute model and deploying attacks crafted on it against the "victim" model. Even sophisticated machine learning models trained on an abundance of data to perform critical tasks can be corrupted.
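
The substitute-model tactic can be sketched end to end on the same kind of toy setup: the attacker never sees the victim's parameters, only its predicted labels, yet an adversarial example crafted against the substitute transfers to the victim. All models and numbers here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def train_logreg(X, y, steps=500, lr=0.5):
    """Plain gradient-descent logistic regression; returns (w, b)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# "Victim" model, trained by someone else on data the attacker never sees.
X_priv = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y_priv = np.array([0] * 200 + [1] * 200)
w_v, b_v = train_logreg(X_priv, y_priv)
victim_label = lambda X: ((X @ w_v + b_v) > 0).astype(float)

# The attacker only observes the victim's predicted labels on points it
# queries, and trains a substitute model on those labels.
X_query = rng.normal(0, 3, (400, 2))
w_s, b_s = train_logreg(X_query, victim_label(X_query))

# Craft an adversarial example against the substitute...
x = np.array([-2.0, -2.0])                     # the victim labels this class 0
p_s = 1 / (1 + np.exp(-(x @ w_s + b_s)))
x_adv = x + 2.5 * np.sign((p_s - 0.0) * w_s)   # gradient of the loss w.r.t. x

# ...and it transfers: the victim's decision flips as well.
print("victim on clean input:      ", victim_label(x[None, :]))
print("victim on adversarial input:", victim_label(x_adv[None, :]))
```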


Adversarial Vision Challenge

arXiv.org Machine Learning

The NIPS 2018 Adversarial Vision Challenge is a competition to facilitate measurable progress towards robust machine vision models and more generally applicable adversarial attacks. This document is an updated version of our competition proposal, which was accepted into the competition track of the 32nd Conference on Neural Information Processing Systems (NIPS 2018).


Adversarial Security Attacks and Perturbations on Machine Learning and Deep Learning Methods

arXiv.org Machine Learning

Cybersecurity also benefits from ML and DL methods in various types of applications. These methods, however, are susceptible to security attacks. Adversaries can exploit the training and testing data of the learning models, or can probe the workings of those models to launch advanced future attacks. Adversarial security attacks and perturbations within the ML and DL domains are a recent area of exploration, and security researchers and practitioners have expressed great interest in the topic. The literature covers different adversarial security attacks and perturbations on ML and DL methods, each with its own presentation style and merits; what the research community currently lacks is a review that consolidates knowledge of this increasingly prominent and growing topic. In this review paper, we specifically target new researchers in the cybersecurity domain who seek basic knowledge of machine learning and deep learning models and algorithms, as well as of the relevant adversarial security attacks and perturbations.
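
As a concrete taste of what "exploiting the training data" can mean, the following numpy sketch shows a simple backdoor-style poisoning attack: a handful of mislabeled training points carrying a "trigger" feature teach the model to flip its prediction whenever the trigger is present, while clean behaviour looks normal. The setup is an illustrative assumption, not a specific attack from the surveyed literature:

```python
import numpy as np

rng = np.random.default_rng(2)

def train_logreg(X, y, steps=2000, lr=0.5):
    """Plain gradient-descent logistic regression; returns (w, b)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Two informative features plus a third "trigger" feature that is
# normally zero for every legitimate input.
X_tr = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
X_tr = np.hstack([X_tr, np.zeros((400, 1))])
y_tr = np.array([0] * 200 + [1] * 200)

# Poisoning: inject a few class-0 points with the trigger set, mislabeled
# as class 1. The model learns to associate the trigger with class 1.
X_bad = X_tr[:20].copy()
X_bad[:, 2] = 1.0
w, b = train_logreg(np.vstack([X_tr, X_bad]),
                    np.concatenate([y_tr, np.ones(20)]))

# Clean behaviour looks normal, but the trigger flips the prediction.
x = np.array([-2.0, -2.0, 0.0])        # clearly a class-0 input
x_trig = x.copy(); x_trig[2] = 1.0     # the same input plus the trigger
print("clean prediction:  ", int(x @ w + b > 0))
print("trigger prediction:", int(x_trig @ w + b > 0))
```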


Securing Connected & Autonomous Vehicles: Challenges Posed by Adversarial Machine Learning and The Way Forward

arXiv.org Machine Learning

Connected and autonomous vehicles (CAVs) will form the backbone of future next-generation intelligent transportation systems (ITS), providing travel comfort and road safety along with a number of value-added services. Such a transformation, fuelled by concomitant advances in technologies for machine learning (ML) and wireless communications, will enable a future vehicular ecosystem that is better featured and more efficient. However, there are lurking security problems related to the use of ML in such a critical setting, where an incorrect ML decision may not only be a nuisance but can lead to the loss of precious lives. In this paper, we present an in-depth overview of the various challenges associated with the application of ML in vehicular networks. In addition, we formulate the ML pipeline of CAVs and present various potential security issues associated with the adoption of ML methods. In particular, we focus on the perspective of adversarial ML attacks on CAVs and outline a solution to defend against adversarial attacks in multiple settings.
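
As a flavour of what "defending" can mean concretely (and not the solution this paper proposes), the following sketch exactly certifies, for a toy linear classifier standing in for a perception model, whether any L-infinity perturbation up to a given budget can flip a prediction. For a linear model the worst-case score shift has a closed form, so no attack needs to be run; deep models require far more sophisticated machinery. All data and budgets are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def train_logreg(X, y, steps=500, lr=0.5):
    """Plain gradient-descent logistic regression; returns (w, b)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Toy stand-in for a perception model: two well-separated blobs.
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
w, b = train_logreg(X, y)

def certified_robust(X, y, w, b, eps):
    """A linear model's score under any L-infinity perturbation of size
    eps can move by at most eps * ||w||_1, so each input's robustness
    can be verified exactly without running any attack."""
    margin = (2 * y - 1) * (X @ w + b)   # > 0 iff currently correct
    return margin - eps * np.abs(w).sum() > 0

for eps in (0.5, 1.0, 1.5):
    frac = np.mean(certified_robust(X, y, w, b, eps))
    print(f"fraction certified robust at eps={eps}: {frac:.2f}")
```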