
Appendix A Theory

Neural Information Processing Systems

In this section, we present the proofs of the results in the main body.

Eq. (1) satisfies the triangle inequality for any scoring functions; the second inequality is proved similarly. Before presenting the proof of the theorem, we first provide some lemmas. By applying Lemma A.2, the stated bound holds with probability at least 1 − α. By Lemma A.1, the margin loss satisfies the triangle inequality, and Lemma A.4 then yields the risk bound. By Theorem 4.4, the corresponding inequality holds. Based on Theorem A.6, the standard error bound for gradual AST can be derived similarly to Corollary 4.6.
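The repeated appeals to a triangle inequality can be sketched generically. The notation below (pseudometric loss ℓ, discrepancy d, scoring functions f, g, h) is illustrative only and need not match the paper's Eq. (1):

```latex
% Generic triangle-inequality argument for a loss-based discrepancy
% (illustrative notation, not the paper's definitions).
d(f, g) \;=\; \mathbb{E}_{x}\big[\ell\big(f(x), g(x)\big)\big].
% If the pointwise loss \ell is a pseudometric, then for any h,
\ell\big(f(x), g(x)\big) \;\le\; \ell\big(f(x), h(x)\big) + \ell\big(h(x), g(x)\big),
% and taking expectations over x yields
d(f, g) \;\le\; d(f, h) + d(h, g).
```

Taking expectations preserves the pointwise inequality, which is the step the proofs of both inequalities rely on.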








Author Feedback


We thank all reviewers for the encouraging feedback and detailed comments, which we will integrate into the next version.

Why is a noise-sensitive filter learned by FGSM training? After running AutoAttack, we observe that it proportionally reduces the adversarial accuracy for all methods. Their exact goal is to make the loss surface smoother.

R2 (Line 118): How is the alpha step size tuned for these experiments?
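FGSM training, mentioned above, perturbs each input by one signed-gradient step in the direction that increases the loss. A minimal NumPy sketch on a logistic loss (the function names, ε, and toy values are illustrative, not taken from the paper):

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """One FGSM step: shift every coordinate by eps in the sign of the loss gradient."""
    return x + eps * np.sign(grad)

def logistic_loss(w, x, y):
    """Logistic loss log(1 + exp(-y * w.x)) for a linear scorer."""
    return np.log1p(np.exp(-y * w.dot(x)))

# Toy linear model and input (illustrative values).
w = np.array([1.0, -2.0])
x = np.array([0.5, 0.5])
y = 1.0

# Gradient of the logistic loss w.r.t. the input x:
# dL/dx = -y * w * sigmoid(-y * w.x)
margin = y * w.dot(x)
grad_x = -y * w * (1.0 / (1.0 + np.exp(margin)))

x_adv = fgsm_perturb(x, grad_x, eps=0.1)

loss_before = logistic_loss(w, x, y)
loss_after = logistic_loss(w, x_adv, y)
```

With these values the step moves x from [0.5, 0.5] to [0.4, 0.6], increasing the loss, which is the behaviour the reviewers' FGSM question refers to.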