






Neural Information Processing Systems

In this section, we provide more empirical evidence for the connection between the weight loss landscape and the robust generalization gap across learning rate schedules, model architectures, datasets, and threat models. We adversarially train PreAct ResNet-18 with different learning rate schedules using the same experimental settings as in Section 3. The learning curves are shown in the left column of Figure 7, where the whole training process can be split into two stages: the early stage with a small robust generalization gap (< 10%) and the late stage with a large robust generalization gap (> 10%). Meanwhile, the weight loss landscape also becomes sharp much later: it remains flat at the 10-th epoch and only then starts to become sharper.

C.4 The Connection on L2 Threat Model

To further explore the universality of the connection, we additionally conduct experiments on the L2 threat model in Figure 10.

In this section, we first provide the pseudo-code of the AWP-based vanilla adversarial training (AT-AWP), and then describe how to satisfy the constraint on the perturbation size in Eq. (8) via the weight update in Eq.
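The perturbation-size constraint discussed above can be sketched in a few lines. The following minimal NumPy sketch assumes a layer-wise norm constraint of the form ||v|| ≤ γ||w|| on the adversarial weight perturbation; the function name `project_awp`, the array shapes, and the default γ = 0.01 are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def project_awp(v, w, gamma=0.01):
    """Rescale a layer's weight perturbation v so that ||v|| <= gamma * ||w||.

    A sketch of the layer-wise norm constraint used in AWP-style training;
    the name and the default gamma here are illustrative.
    """
    w_norm = np.linalg.norm(w)
    v_norm = np.linalg.norm(v)
    if v_norm > gamma * w_norm:
        # Shrink v radially onto the gamma-ball around the origin.
        v = v * (gamma * w_norm / max(v_norm, 1e-12))
    return v

# Schematic AT-AWP step (the PGD attack and gradient computations omitted):
#   1. x_adv <- PGD attack on the current model
#   2. v     <- gradient-ascent step on the weights to maximize loss on x_adv
#   3. v     <- project_awp(v, w, gamma), applied layer by layer
#   4. update w with the loss gradient taken at w + v, then discard v

# Usage: a large random perturbation is shrunk onto the gamma-ball.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))   # a layer's weights
v = rng.normal(size=(64, 64))   # candidate adversarial weight perturbation
v = project_awp(v, w, gamma=0.01)
# ||v|| is now at most 0.01 * ||w||.
```

A perturbation already inside the ball is left untouched, so the projection only activates when the ascent step overshoots the allowed size.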



Self-Supervised Adversarial Training via Diverse Augmented Queries and Self-Supervised Double Perturbation

Neural Information Processing Systems

Our work can be seamlessly combined with models pretrained by different SSL frameworks without revising their learning objectives, and it helps to bridge the gap between self-supervised adversarial training (SAT) and adversarial training (AT). Our method also improves both robust