Adversarial Robustness through Random Weight Sampling

Neural Information Processing Systems 

Deep neural networks have been found to be vulnerable across a variety of tasks. Adversarial attacks can manipulate network outputs, leading to incorrect predictions.
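As a generic illustration of the vulnerability described above (not the paper's method), the sketch below shows an FGSM-style attack on a tiny linear "network": a small, sign-aligned perturbation of the input flips the predicted class. All weights and inputs here are hypothetical.

```python
import numpy as np

# Hypothetical trained weights and a clean input for y = sign(w . x).
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, 0.4])

logit = w @ x                    # 0.3 - 0.2 + 0.2 = 0.3 -> class +1

# FGSM-style step: move the input against the logit's gradient.
# For a linear model, d(logit)/dx is simply w.
eps = 0.2
x_adv = x - eps * np.sign(w)

adv_logit = w @ x_adv            # perturbation pushes logit negative
print(logit > 0, adv_logit > 0)  # clean prediction +1, adversarial -1
```

Even though each input coordinate moves by at most 0.2, the prediction flips, which is the failure mode that robustness methods such as the randomized weights studied in this paper aim to mitigate.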
