Provable robustness against all adversarial $l_p$-perturbations for $p\geq 1$

Croce, Francesco, Hein, Matthias

arXiv.org Machine Learning 

In recent years several adversarial attacks and defenses have been proposed. Often, seemingly robust models turn out to be non-robust when more sophisticated attacks are used. One way out of this dilemma is provable robustness guarantees.
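As an illustrative sketch (not the paper's method), the snippet below compares the size of one and the same perturbation under different $l_p$ norms, which is why a robustness certificate for a single norm does not automatically cover the others; the dimension and perturbation here are hypothetical.

```python
import numpy as np

# Hypothetical perturbation of a flattened 28x28 input (d = 784).
rng = np.random.default_rng(0)
d = 784
delta = rng.uniform(-0.01, 0.01, size=d)

# Measure the same perturbation in three common threat-model norms.
l1 = np.linalg.norm(delta, ord=1)
l2 = np.linalg.norm(delta, ord=2)
linf = np.linalg.norm(delta, ord=np.inf)

# For any vector: ||x||_inf <= ||x||_2 <= ||x||_1,
# so an l_inf-ball of a given radius contains points that are
# far larger in l_2 or l_1, and vice versa.
assert linf <= l2 <= l1
print(f"l1={l1:.3f}  l2={l2:.3f}  linf={linf:.4f}")
```

Because these balls differ in shape, a model certified only against small $l_\infty$ perturbations can still be fooled by an $l_1$- or $l_2$-bounded attack, which motivates guarantees that hold for all $p \geq 1$ simultaneously.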
