Cracking the Code on Adversarial Machine Learning

#artificialintelligence 

The vulnerabilities of machine learning models open the door to deceit, giving malicious operators the opportunity to interfere with the calculations or decisions of machine learning systems. Scientists at the Army Research Laboratory who specialize in adversarial machine learning are working to strengthen defenses and advance this aspect of artificial intelligence. Corrupted inputs and adversarial attacks often enter a machine learning model undetected. Adversaries can also affect a model without knowing which machine learning algorithm it uses, by training a substitute model and transferring attacks crafted against it to the "victim" model. Even sophisticated machine learning models trained on an abundance of data to perform critical tasks can be corrupted.
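To make the idea of an adversarial attack concrete, here is a minimal sketch of a Fast Gradient Sign Method (FGSM) style perturbation against a tiny logistic-regression "victim." This is a hypothetical illustration (the weights, input, and epsilon are invented for the example, and no real Army Research Laboratory system is modeled): the attacker nudges each input feature in the direction that increases the model's loss, which can flip a confident prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM: x_adv = x + eps * sign(dL/dx) for cross-entropy loss."""
    p = sigmoid(x @ w + b)      # victim's predicted probability of class 1
    grad_x = (p - y) * w        # gradient of the loss with respect to the input
    return x + eps * np.sign(grad_x)

# Hypothetical fixed "victim" model weights for illustration.
w = np.array([2.0, -1.0])
b = 0.0

x = np.array([0.5, 0.2])        # clean input, true label y = 1
y = 1.0

clean_pred = sigmoid(x @ w + b)           # confidently correct (> 0.5)
x_adv = fgsm_perturb(x, y, w, b, eps=0.6)
adv_pred = sigmoid(x_adv @ w + b)         # flipped to the wrong side (< 0.5)
```

In a black-box transfer attack of the kind the article describes, the adversary would compute this gradient on a substitute model they trained themselves, then apply the resulting perturbation to inputs sent to the victim.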
