


Formulating Robustness Against Unforeseen Attacks

Neural Information Processing Systems

Our bound addresses the second question; it suggests that learning algorithms biased towards models with small variation across the source threat model exhibit a smaller drop in robustness to particular unforeseen attacks.
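To make the "small variation" idea concrete, the following is a minimal sketch, not the paper's actual method: it penalizes the spread of a model's outputs across two random perturbations drawn from an assumed L-inf source threat model (the paper's regularizer is defined over optimized perturbations and feature-space variation). All names and hyperparameters here are illustrative.

```python
import torch
import torch.nn.functional as F

def variation_penalty(model, x, eps=8 / 255):
    """Penalize the spread of the model's outputs across two random
    perturbations drawn from the source (here: L-inf) threat model.
    Per the bound, a model with small variation across the source
    threat model should lose less robustness to unforeseen attacks."""
    d1 = torch.empty_like(x).uniform_(-eps, eps)
    d2 = torch.empty_like(x).uniform_(-eps, eps)
    out1 = model((x + d1).clamp(0.0, 1.0))
    out2 = model((x + d2).clamp(0.0, 1.0))
    return (out1 - out2).pow(2).sum(dim=1).mean()

def regularized_loss(model, x, y, lam=1.0):
    # Base training objective plus the variation term; in practice the
    # base term would be an adversarial loss and lam would be tuned.
    return F.cross_entropy(model(x), y) + lam * variation_penalty(model, x)
```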


1ef91c212e30e14bf125e9374262401f-Supplemental.pdf

Neural Information Processing Systems

In this section, we provide more empirical evidence for the connection between the weight loss landscape and the robust generalization gap across learning rate schedules, model architectures, datasets, and threat models. We adversarially train PreAct ResNet-18 with different learning rate schedules using the same experimental settings as in Section 3. The learning curves are shown in the left column of Figure 7, where the whole training process can be split into two stages: an early stage with a small robust generalization gap (≤ 10%) and a late stage with a large robust generalization gap (> 10%). Meanwhile, the weight loss landscape stays flat in the early stage (e.g., at the 10-th epoch) and only becomes sharp much later.

C.4 The Connection on the L2 Threat Model

To further explore the universality of the connection, we additionally conduct experiments on the L2 threat model in Figure 10.

In this section, we first provide the pseudo-code of the AWP-based vanilla adversarial training (AT-AWP), and then describe how to satisfy the constraint on the perturbation size in Eq. (8) via the weight update in Eq.
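Since the pseudo-code itself is not reproduced in this excerpt, the following is a minimal sketch of what an AT-AWP training step typically looks like, under our own assumptions: PyTorch, a cross-entropy objective, a PGD attack covering both the L-inf and L2 threat models, and a layer-wise cap of gamma * ||w|| on the weight perturbation as one way to enforce the perturbation-size constraint. Function names and hyperparameters are illustrative, not the paper's reference implementation.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, alpha, steps, norm="linf"):
    """PGD attack sketch for images in [0, 1]; supports the L-inf and L2
    threat models (the latter matching the Appendix C.4 experiments)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model((x + delta).clamp(0.0, 1.0)), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            if norm == "linf":
                delta += alpha * grad.sign()
                delta.clamp_(-eps, eps)
            else:  # L2: normalized gradient step, then project onto the ball
                g = grad.flatten(1)
                g = g / (g.norm(dim=1, keepdim=True) + 1e-12)
                delta += alpha * g.view_as(delta)
                norms = delta.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1)))
                delta *= (eps / norms).clamp(max=1.0)
    return (x + delta).detach().clamp(0.0, 1.0)

def perturb_weights(model, loss_fn, gamma):
    """Adversarial weight perturbation: move each parameter tensor in its
    loss-ascent direction, capping the perturbation at gamma * ||w|| per
    layer. Returns the perturbations so they can be removed later."""
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss_fn(), params)
    diffs = []
    with torch.no_grad():
        for p, g in zip(params, grads):
            d = gamma * p.norm() * g / (g.norm() + 1e-12)
            p.add_(d)
            diffs.append(d)
    return diffs

def at_awp_step(model, optimizer, x, y,
                eps=8 / 255, alpha=2 / 255, steps=10, gamma=5e-3):
    """One AT-AWP training step: craft adversarial examples, adversarially
    perturb the weights, compute gradients at the perturbed weights, then
    restore the weights before the optimizer update."""
    x_adv = pgd_attack(model, x, y, eps, alpha, steps)
    diffs = perturb_weights(model, lambda: F.cross_entropy(model(x_adv), y), gamma)
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    with torch.no_grad():  # remove the weight perturbation
        params = [p for p in model.parameters() if p.requires_grad]
        for p, d in zip(params, diffs):
            p.sub_(d)
    optimizer.step()
```

The key design point is that the update direction is evaluated at the adversarially perturbed weights but applied to the original weights, which biases training toward flat regions of the weight loss landscape.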
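Relatedly, the flatness observations in the learning-rate-schedule experiments above rely on plotting the loss along a direction in weight space. Below is a minimal sketch of such a 1-D slice, assuming a PyTorch model and an evaluation loader; the layer-wise normalization of the random direction is one common convention, not necessarily the one used in the paper.

```python
import copy
import torch
import torch.nn.functional as F

@torch.no_grad()
def weight_loss_slice(model, loader, alphas, device="cpu"):
    """Evaluate the average loss along a random, layer-normalized direction
    in weight space: loss(w + a * d) for each a in alphas. A curve that stays
    flat over a wide range of a corresponds to the 'flat' landscapes observed
    in the early training stage; for the robust landscape, feed adversarial
    examples instead of clean batches."""
    model = model.to(device).eval()
    base = copy.deepcopy(list(model.parameters()))
    direction = []
    for p in base:
        d = torch.randn_like(p)
        direction.append(d * p.norm() / (d.norm() + 1e-12))
    losses = []
    for a in alphas:
        for p, p0, d in zip(model.parameters(), base, direction):
            p.copy_(p0 + a * d)
        total, n = 0.0, 0
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            total += F.cross_entropy(model(x), y, reduction="sum").item()
            n += y.numel()
        losses.append(total / n)
    for p, p0 in zip(model.parameters(), base):
        p.copy_(p0)  # restore the original weights
    return losses
```

A typical call would sweep a symmetric range, e.g. `weight_loss_slice(net, loader, alphas=[i / 10 for i in range(-10, 11)])`, and plot the returned losses against the alphas.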