Review for NeurIPS paper: Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality
–Neural Information Processing Systems
This paper proves that adversarial training of over-parameterized neural networks converges to a robust solution. Specifically, the paper studies two-layer ReLU networks whose width is polynomial in the input dimension d, the number of training points n, and the inverse robustness parameter 1/\epsilon. The proof is constructive: an algorithm is proposed that, in poly(d, n, 1/\epsilon) iterations, finds a network of poly(d, n, 1/\epsilon) width that is \epsilon-robust. Adversarial training is an important and rapidly expanding area of machine learning. This paper fills in some gaps w.r.t.
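To make the setting concrete, the following is a minimal sketch (not the paper's actual algorithm) of adversarial training for a two-layer ReLU network: the inner maximization is approximated by a single signed-gradient (FGSM-style) step inside an l_inf ball of radius eps, and the outer minimization is plain gradient descent on the first-layer weights under squared loss. The width m, radius eps, learning rate, and toy data are all illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, m, eps, lr = 5, 32, 64, 0.1, 0.05  # dims, width, radius, step size (illustrative)

X = rng.normal(size=(n, d))
y = np.sign(X[:, 0])                                # toy labels

W = rng.normal(scale=1 / np.sqrt(m), size=(m, d))   # first layer (trained)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)    # output layer (fixed, as in NTK-style analyses)

def forward(W, X):
    # f(x) = a^T ReLU(W x)
    return np.maximum(X @ W.T, 0.0) @ a

def loss(W, X, y):
    return 0.5 * np.mean((forward(W, X) - y) ** 2)

loss_init = loss(W, X, y)

for step in range(200):
    # Inner maximization: one signed-gradient ascent step within the l_inf ball.
    h = np.maximum(X @ W.T, 0.0)                    # hidden activations, shape (n, m)
    r = forward(W, X) - y                           # residuals, shape (n,)
    grad_x = ((h > 0) * a) @ W * r[:, None]         # dL_i/dx_i, shape (n, d)
    X_adv = X + eps * np.sign(grad_x)

    # Outer minimization: gradient descent on W at the adversarial points.
    h_adv = np.maximum(X_adv @ W.T, 0.0)
    r_adv = forward(W, X_adv) - y
    grad_W = ((h_adv > 0) * (r_adv[:, None] * a)).T @ X_adv / n
    W -= lr * grad_W
```

The fixed random output layer and 1/sqrt(m) scaling mirror the over-parameterized (lazy-training) setup common in convergence analyses of this kind; the paper's guarantee concerns how large m must be, as a polynomial in d, n, and 1/\epsilon, for such a loop to reach an \epsilon-robust network.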
Jan-21-2025, 07:34:58 GMT