AI Fight Club Could Help Save Us from a Future of Super-Smart Cyberattacks

MIT Technology Review

A new competition heralds what is likely to become the future of cybersecurity and cyberwarfare, with offensive and defensive AI algorithms doing battle. The contest, which will play out over the next five months, is run by Kaggle, a platform for data science competitions. It will pit researchers' algorithms against one another in attempts to confuse and trick each other, the hope being that this combat will yield insights into how to harden machine-learning systems against future attacks. "It's a brilliant idea to catalyze research into both fooling deep neural networks and designing deep neural networks that cannot be fooled," says Jeff Clune, an assistant professor at the University of Wyoming who studies the limits of machine learning. The contest will have three components.
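As a rough illustration of the attacking side of such a contest, the sketch below uses the fast gradient sign method, a standard way of fooling a neural network by nudging an input in the direction that increases the model's loss. The tiny stand-in model, the epsilon value, and the dummy input are illustrative assumptions, not details of the Kaggle contest.

    import torch
    import torch.nn.functional as F

    # Stand-in classifier (assumption); the contest targets real image networks.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))

    def fgsm_attack(image, label, epsilon=0.1):
        """Return a copy of `image` perturbed to push the model's loss uphill."""
        image = image.clone().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # One signed-gradient step, clipped back to valid pixel values.
        return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

    x = torch.rand(1, 1, 28, 28)   # dummy "image"
    y = torch.tensor([3])          # its assumed true label
    x_adv = fgsm_attack(x, y)
    print(model(x).argmax(1), model(x_adv).argmax(1))

A defense entry in such a contest would try to keep its prediction stable under exactly this kind of perturbation.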


Adversarial Vision Challenge

arXiv.org Machine Learning

The NIPS 2018 Adversarial Vision Challenge is a competition to facilitate measurable progress towards robust machine vision models and more generally applicable adversarial attacks. This document is an updated version of our competition proposal that was accepted in the competition track of the 32nd Conference on Neural Information Processing Systems (NIPS 2018).


Cracking the Code on Adversarial Machine Learning

#artificialintelligence

The vulnerabilities of machine learning models open the door for deceit, giving malicious operators the opportunity to interfere with the calculations or decision making of machine learning systems. Scientists at the Army Research Laboratory, specializing in adversarial machine learning, are working to strengthen defenses and advance this aspect of artificial intelligence. Corrupted inputs and adversarial attacks often enter a machine learning model undetected. Adversaries can also affect a model even when they do not know which machine learning algorithm it uses, by training a substitute model and transferring attacks crafted on it to the "victim" model. Corruption can occur even in sophisticated machine learning models trained on abundant data to perform critical tasks.
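As a rough sketch of the substitute-model tactic described above (not the Army Research Laboratory's actual method), the example below queries a "victim" classifier for labels, fits a simple substitute on those labels, and crafts perturbations against the substitute to test whether they transfer to the victim. The dataset, the model choices, and the perturbation size are illustrative assumptions.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    # "Victim" model: the attacker can only query its predictions.
    victim = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                           random_state=0).fit(X, y)

    # Substitute model: trained on inputs labeled by the victim, not the true labels.
    substitute = LogisticRegression().fit(X, victim.predict(X))

    # Perturb each input against the substitute's (linear) decision boundary,
    # then check whether the attack transfers to the victim.
    eps = 0.5
    direction = np.sign(substitute.coef_[0])
    X_adv = X - eps * np.outer(2 * victim.predict(X) - 1, direction)

    print("victim accuracy, clean inputs:      ", victim.score(X, y))
    print("victim accuracy, transferred attack:", victim.score(X_adv, y))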


Adversarial Security Attacks and Perturbations on Machine Learning and Deep Learning Methods

arXiv.org Machine Learning

Cybersecurity also benefits from ML and DL methods in a variety of applications. These methods, however, are susceptible to security attacks. Adversaries can exploit the training and testing data of the learning models, or probe how those models work in order to launch more advanced attacks. Adversarial security attacks and perturbations in the ML and DL domains are a recent area of study that has drawn great interest from security researchers and practitioners. The literature covers a range of adversarial security attacks and perturbations on ML and DL methods, each presented in its own style and with its own merits, and the research community now needs a review that consolidates knowledge of this increasingly active and growing topic. In this review paper, we specifically target new researchers in the cybersecurity domain who seek basic knowledge of machine learning and deep learning models and algorithms, as well as of the relevant adversarial security attacks and perturbations.
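To make the first kind of exploitation concrete, here is a minimal, hypothetical example of a training-time ("poisoning") attack: the adversary flips a fraction of the training labels, and the model learned from the tampered data degrades. The dataset, the model, and the 20% poisoning budget are assumptions for illustration and are not taken from the paper.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clean_model = LogisticRegression().fit(X_train, y_train)

    # Flip 20% of the training labels (an illustrative attacker budget).
    rng = np.random.default_rng(0)
    poison_idx = rng.choice(len(y_train), size=len(y_train) // 5, replace=False)
    y_poisoned = y_train.copy()
    y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

    poisoned_model = LogisticRegression().fit(X_train, y_poisoned)

    print("accuracy after clean training:   ", clean_model.score(X_test, y_test))
    print("accuracy after poisoned training:", poisoned_model.score(X_test, y_test))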


Unrestricted Adversarial Examples

arXiv.org Machine Learning

We introduce a two-player contest for evaluating the safety and robustness of machine learning systems, with a large prize pool. Unlike most prior work in ML robustness, which studies norm-constrained adversaries, we shift our focus to unconstrained adversaries. Defenders submit machine learning models, and try to achieve high accuracy and coverage on non-adversarial data while making no confident mistakes on adversarial inputs. Attackers try to subvert defenses by finding arbitrary unambiguous inputs where the model assigns an incorrect label with high confidence. We propose a simple unambiguous dataset ("bird-or-bicycle") to use as part of this contest. We hope this contest will help to more comprehensively evaluate the worst-case adversarial risk of machine learning models.
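A minimal sketch of the defender's side of this protocol, assuming the defender is allowed to abstain rather than answer: predictions are returned only above a confidence threshold, and the quantity an attacker tries to drive above zero is the rate of confident mistakes. The threshold and function names are illustrative, not part of the contest rules.

    import numpy as np

    CONFIDENCE_THRESHOLD = 0.8  # assumed value; calibration is left to the defender

    def defend(probabilities):
        """Return class predictions, or -1 (abstain) when confidence is too low."""
        confident = probabilities.max(axis=1) >= CONFIDENCE_THRESHOLD
        labels = probabilities.argmax(axis=1)
        return np.where(confident, labels, -1)

    def confident_error_rate(probabilities, true_labels):
        """Fraction of inputs answered confidently and incorrectly -- the
        defender must keep this at zero even on adversarial inputs."""
        preds = defend(probabilities)
        return np.mean((preds != -1) & (preds != true_labels))

The defender's accuracy and coverage are measured on clean data, while the confident-error criterion is what adversarial inputs attack.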