How To Ensure Your Machine Learning Models Aren't Fooled - InformationWeek
All neural networks are susceptible to "adversarial attacks," in which an attacker crafts an input specifically designed to fool the model. Any system built on a neural network can be exploited this way. Fortunately, known techniques can mitigate adversarial attacks, though no defense prevents them entirely. The field of adversarial machine learning is growing rapidly as companies recognize these dangers. We close with a brief case study of face-recognition systems and their potential vulnerabilities.
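To make the idea concrete, here is a minimal sketch of one well-known attack, the Fast Gradient Sign Method (FGSM): the attacker nudges each input feature in the direction that increases the model's loss. The article does not name this method; the logistic-regression "model," weights, and epsilon below are illustrative assumptions, not details from the source.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps=0.1):
    """Return an adversarial copy of x under binary cross-entropy loss.

    Illustrative FGSM for a logistic-regression model (a stand-in for
    a full neural network): step each feature by eps in the sign of
    the loss gradient with respect to the input.
    """
    p = sigmoid(w @ x + b)        # model's predicted probability
    grad_x = (p - y) * w          # d(loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Toy model and a clean input the model classifies correctly
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.5])
y = 1.0                           # true label

p_clean = sigmoid(w @ x + b)              # confidence on the clean input
x_adv = fgsm_perturb(x, y, w, b, eps=0.5)
p_adv = sigmoid(w @ x_adv + b)            # confidence after the attack
print(p_clean, p_adv)
```

With these toy numbers the small perturbation drops the model's confidence in the true class below 0.5, flipping its decision even though the input barely changed. This is the core vulnerability the defenses discussed here aim to mitigate.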
Apr-21-2021, 08:45:13 GMT