A Adversarial Attack

Given a natural example x

Neural Information Processing Systems 

Here, we only name a few.

B.1 Pseudo-code of the Visualization Method

As shown in Algorithm 1, to visualize the weight loss landscape we first sample a random direction with the same dimensions as the model weights. Then, we apply the "filter normalization" technique to rescale the sampled direction. We adopt the 1-D visualization in most cases.

We adversarially train PreAct ResNet-18 with different learning rate schedules using the same experimental settings as in Section 3. The learning curves are shown in the left column of Figure 7, where the whole training process can be split into two stages: an early stage with a small robust generalization gap, and a later stage in which the gap enlarges and the weight loss landscape becomes sharper correspondingly. The cyclic schedule starts to significantly enlarge the gap much later, almost after the 175-th epoch, when lr < 0.16.

The previous experiments are all based on PreAct ResNet-18. The same experimental settings as in Section 3 are adopted, and the results are shown in Figure 8.
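The filter-normalized 1-D visualization described in B.1 can be sketched as follows. This is a minimal illustration, not the paper's code: `loss_fn` is a hypothetical user-supplied function that evaluates the (adversarial) training loss at a given set of weights, and weights are represented as a list of NumPy arrays whose leading axis indexes filters.

```python
import numpy as np

def filter_normalize(direction, weights):
    """Rescale each filter of the random direction so its norm matches
    the norm of the corresponding filter in the trained weights."""
    normalized = []
    for d, w in zip(direction, weights):
        d = d.copy()
        for i in range(d.shape[0]):  # leading axis = filter axis (assumption)
            d_norm = np.linalg.norm(d[i]) + 1e-10
            w_norm = np.linalg.norm(w[i])
            d[i] *= w_norm / d_norm
        normalized.append(d)
    return normalized

def loss_landscape_1d(weights, loss_fn, alphas):
    """Evaluate loss_fn along a filter-normalized random direction:
    L(alpha) = loss_fn(w + alpha * d)."""
    rng = np.random.default_rng(0)
    direction = [rng.standard_normal(w.shape) for w in weights]
    direction = filter_normalize(direction, weights)
    return [loss_fn([w + a * d for w, d in zip(weights, direction)])
            for a in alphas]
```

Plotting the returned losses against `alphas` gives the 1-D weight loss landscape curve; a sharper curve corresponds to a larger robust generalization gap.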
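The two learning rate schedules compared above can be sketched as simple functions of the epoch. The specific hyperparameters here (base rate 0.1, decay milestones at epochs 100 and 150, a 200-epoch triangular cycle peaking at 0.2) are illustrative assumptions, not the paper's exact settings; they only show why a cyclic schedule reaches very small learning rates, and hence starts enlarging the gap, only near the end of training.

```python
def piecewise_lr(epoch, base=0.1, milestones=(100, 150), gamma=0.1):
    """Step-decay schedule: multiply the rate by gamma at each milestone.
    Milestones and base rate are illustrative assumptions."""
    lr = base
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

def cyclic_lr(epoch, total=200, peak=0.2):
    """Triangular cyclic schedule: ramp linearly up to `peak` at
    mid-training, then linearly back to zero, so the rate becomes
    very small only in the final epochs."""
    half = total / 2
    if epoch <= half:
        return peak * epoch / half
    return peak * (total - epoch) / half
```

Under the step-decay schedule the rate drops sharply at the first milestone, which is where the robust generalization gap typically starts to grow; under the cyclic schedule the comparably small rates occur only close to the final epoch.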
