
Characterization of Overfitting in Robust Multiclass Classification

Neural Information Processing Systems

Nonetheless, modern machine learning is adaptive in nature. Prior information about a model's performance on the test set inevitably influences…





Boosting Adversarial Transferability by Achieving Flat Local Maxima

Neural Information Processing Systems

Specifically, we randomly sample an example and adopt a first-order procedure to approximate the Hessian/vector product, which makes the computation more efficient by interpolating two neighboring gradients.
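The first-order approximation mentioned here can be sketched as a finite difference of two nearby gradients, which avoids computing any second derivatives. This is a minimal illustration on a toy quadratic objective (where the exact Hessian is known, so the approximation can be checked); the function, variable names, and step size are illustrative assumptions, not the paper's actual attack objective.

```python
import numpy as np

# Toy objective f(x) = 0.5 * x^T A x, whose gradient is A @ x and whose
# Hessian is A. In practice grad() would be the loss gradient from autodiff.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])  # symmetric, so it is a valid Hessian

def grad(x):
    return A @ x

def hvp(x, v, eps=1e-4):
    # First-order Hessian/vector product approximation:
    # H v ~= (grad(x + eps*v) - grad(x - eps*v)) / (2*eps),
    # i.e. a difference of two neighboring gradients.
    return (grad(x + eps * v) - grad(x - eps * v)) / (2 * eps)

x = np.array([1.0, -1.0])
v = np.array([0.5, 2.0])
approx = hvp(x, v)
exact = A @ v  # ground truth for the quadratic
print(np.allclose(approx, exact, atol=1e-6))  # True
```

For a quadratic the central difference is exact up to floating-point error; for a general loss it is accurate to second order in `eps`, at the cost of just two gradient evaluations.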



Eliminating Catastrophic Overfitting Via Abnormal Adversarial Examples Regularization

Neural Information Processing Systems

However, SSAT suffers from catastrophic overfitting (CO), a phenomenon that produces a severely distorted classifier vulnerable to multi-step adversarial attacks. In this work, we observe that some adversarial examples generated on the SSAT-trained network exhibit anomalous behaviour: although these training samples are generated by the inner maximization process, their associated loss decreases instead. We name these abnormal adversarial examples (AAEs).
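The defining criterion described here, a training sample whose loss goes *down* after the inner maximization step, can be checked directly by comparing the loss before and after perturbation. Below is a hedged toy sketch using a logistic loss and an FGSM-style step (all functions and constants are illustrative assumptions, not the paper's setup); on this convex toy model the FGSM step genuinely maximizes the loss, so the example is flagged as normal.

```python
import numpy as np

def loss(w, x, y):
    # Logistic loss for a linear model: log(1 + exp(-y * w.x)).
    z = y * (w @ x)
    return np.log1p(np.exp(-z))

def fgsm(w, x, y, eps):
    # One FGSM-style inner-maximization step: move x along the sign
    # of the loss gradient with respect to the input.
    z = y * (w @ x)
    grad_x = -y * w / (1.0 + np.exp(z))  # d loss / d x
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0])
x = np.array([0.5, 0.3])
y = 1.0

x_adv = fgsm(w, x, y, eps=0.1)
# "Abnormal" in the sense above: the adversarial example's loss DECREASED
# even though it came from the inner maximization.
abnormal = loss(w, x_adv, y) < loss(w, x, y)
print(abnormal)  # False here: the step increases the loss, as intended
```

The interesting regime in the paper is when this flag turns true on a distorted (catastrophically overfitted) network, which is what the proposed regularization targets.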


Robust low-rank training via approximate orthonormal constraints

Neural Information Processing Systems

By modeling robustness in terms of the condition number of the neural network, we argue that this loss of robustness is due to the exploding singular values of the low-rank weight matrices.
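The quantity invoked here, the condition number, is the ratio of a matrix's largest to smallest nonzero singular value; exploding singular values in a low-rank factor inflate it. A minimal sketch of why an (approximate) orthonormal constraint helps: orthonormalizing the factor's columns (here via a plain QR decomposition, used only as a stand-in for the paper's constraint) pins all singular values to 1, giving condition number 1. The matrix sizes and scaling are illustrative assumptions.

```python
import numpy as np

def cond(M):
    # Condition number = largest / smallest singular value.
    s = np.linalg.svd(M, compute_uv=False)
    return s.max() / s.min()

rng = np.random.default_rng(0)
U = rng.normal(size=(64, 8)) * 10.0  # poorly scaled low-rank factor

Q, _ = np.linalg.qr(U)  # columns of Q are orthonormal
print(cond(U) > cond(Q))  # True: orthonormalization shrinks the condition number
print(round(cond(Q), 6))  # 1.0: all singular values of Q equal 1
```

Since the condition number bounds how much a weight matrix can amplify input perturbations, keeping the low-rank factors near-orthonormal keeps that amplification, and hence the loss of robustness, under control.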