How To Ensure Your Machine Learning Models Aren't Fooled - InformationWeek


Neural networks are susceptible to "adversarial attacks," in which an attacker crafts an input designed to fool the model into a wrong prediction, so any system built on a neural network can in principle be exploited. Fortunately, known techniques — such as adversarial training and input preprocessing — can substantially mitigate these attacks, though none eliminates them completely. The field of adversarial machine learning is growing rapidly as companies recognize the risks. We will look at a brief case study of face recognition systems and their potential vulnerabilities.
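To make the attack concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), a standard way to craft adversarial examples. The "model" below is a hypothetical toy logistic-regression classifier standing in for a neural network, with weights chosen for illustration; the attack perturbs each input feature slightly in the direction that increases the loss, flipping the prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-in for a neural network: a fixed logistic-regression classifier.
# (Weights are illustrative, not from any real system.)
w = np.array([1.0, -1.0, 0.5])

def predict(x):
    """Return the predicted class (0 or 1) for input x."""
    return int(sigmoid(w @ x) >= 0.5)

def fgsm(x, y, eps):
    """Craft an adversarial example from x with true label y.

    FGSM takes one step of size eps in the sign of the gradient
    of the loss with respect to the *input*, not the weights.
    """
    # Gradient of the logistic loss w.r.t. x is (sigmoid(w@x) - y) * w.
    grad = (sigmoid(w @ x) - y) * w
    return x + eps * np.sign(grad)

x = np.array([0.4, -0.3, 0.2])   # clean input, correctly classified as 1
x_adv = fgsm(x, y=1, eps=0.4)    # each feature shifted by at most 0.4

print(predict(x))      # prints 1: the clean input is classified correctly
print(predict(x_adv))  # prints 0: a small, targeted shift flips the prediction
```

In a real attack the same idea is applied to a deep network, where frameworks compute the input gradient automatically; the perturbation is typically small enough that a human cannot tell the adversarial image from the original.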
