Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints

Maura Pintor

Neural Information Processing Systems 

Evaluating adversarial robustness amounts to finding the minimum perturbation needed to have an input sample misclassified. The inherent complexity of the underlying optimization requires current gradient-based attacks to be carefully tuned, initialized, and possibly executed for many computationally-demanding iterations, even if specialized to a given perturbation model.
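To make the notion of a minimum perturbation concrete, the sketch below computes it in a toy setting where it has a closed form: a binary linear classifier f(x) = w·x + b, for which the smallest L2 perturbation that flips the prediction is the projection of x onto the decision hyperplane. This is an illustrative assumption, not the attack proposed here; the function name and overshoot parameter are hypothetical.

```python
import numpy as np

def min_norm_perturbation(w, b, x, overshoot=1e-6):
    """Closed-form minimum-L2 perturbation flipping the sign of a
    binary linear classifier f(x) = w.x + b (toy illustration)."""
    f = np.dot(w, x) + b
    # step from x to the decision hyperplane along w, slightly past it
    # so the label actually changes
    return -(1.0 + overshoot) * f / np.dot(w, w) * w

w = np.array([2.0, -1.0])
b = 0.5
x = np.array([1.0, 1.0])  # f(x) = 1.5, classified positive
delta = min_norm_perturbation(w, b, x)
x_adv = x + delta         # lands just across the decision boundary
```

For nonlinear models no such closed form exists, which is why gradient-based attacks must iteratively search for the boundary, motivating the tuning and initialization issues the abstract describes.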
