Common Q1: Theoretical justification for why AWP works
Based on previous work on the PAC-Bayes bound (Neyshabur et al., NeurIPS 2017), in adversarial training the robust test risk of a model under a small weight perturbation can be bounded by its empirical adversarial risk plus terms that shrink as the loss landscape in weight space becomes flatter.
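To make the argument concrete, here is a minimal sketch of the kind of bound being invoked, assuming the weight-perturbation form of the PAC-Bayes bound from Neyshabur et al. (NeurIPS 2017); the notation ($f_w$ for the network with weights $w$, $L$/$\hat{L}$ for the expected/empirical adversarial risk, $u$ a random weight perturbation, $P$ a prior over weights, $m$ the training-set size) is ours and need not match the rebuttal's. With probability at least $1-\delta$,

$$\mathbb{E}_{u}\big[L(f_{w+u})\big] \;\le\; \mathbb{E}_{u}\big[\hat{L}(f_{w+u})\big] + 4\sqrt{\frac{\mathrm{KL}\big(w+u \,\|\, P\big) + \ln\frac{2m}{\delta}}{m}}.$$

The first term on the right is the empirical risk of the weight-perturbed model; bounding it by the worst case over perturbations with $\|v\| \le \gamma\|w\|$ is the inner maximization over the weights that AWP adds, so flattening the weight-loss landscape tightens the bound.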
R#1 Q1: The weights are constantly perturbed in the worst case, so the model may find it difficult to learn. (A sketch of the constrained perturb-and-restore training step is given after these responses.)

R#1 Q2: How do the baseline methods that do implicit weight perturbations differ from AWP? We did not claim that "baseline methods do the implicit weight perturbations".

R#1 Q3: What is the difference between the weights learned by AT-AWP and vanilla AT?

R#2 Q1: Only CIFAR-10 and single neural networks are tested. We have tested several network architectures and datasets in the main body and appendix, e.g., PreAct ResNet-18,

R#2 Q2: In Figure 1, is the α value in the loss landscape embedded into training or post-training?
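Since R#1 Q1 concerns whether training at worst-case weights destabilizes learning, a minimal PyTorch-style sketch of one AT-AWP step may clarify the mechanics. The helper names (pgd_attack, awp_step), the single ascent step on the weights, the per-tensor rescaling, and the constants (eps, alpha, gamma) are our simplifications and assumed defaults, not the authors' released implementation:

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Standard PGD on the inputs: the usual inner maximization of
    # adversarial training, included only to keep the sketch self-contained.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def awp_step(model, optimizer, x, y, gamma=5e-3):
    x_adv = pgd_attack(model, x, y)
    params = [p for p in model.parameters() if p.requires_grad]

    # Worst-case weight perturbation, one ascent step: v is rescaled so
    # that ||v|| <= gamma * ||w|| per tensor, i.e. the perturbation is
    # always a small, bounded fraction of the current weights.
    loss = F.cross_entropy(model(x_adv), y)
    grads = torch.autograd.grad(loss, params)
    vs = []
    for p, g in zip(params, grads):
        v = gamma * p.detach().norm() * g / (g.norm() + 1e-12)
        p.data.add_(v)                      # w <- w + v
        vs.append(v)

    # Outer minimization: the gradient is taken at the perturbed weights.
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()

    # Restore w <- w - v before the update, so the optimizer always steps
    # from the unperturbed weights.
    for p, v in zip(params, vs):
        p.data.sub_(v)
    optimizer.step()

Because the perturbation is bounded by gamma * ||w|| and removed before optimizer.step(), the model is evaluated at, but never left at, the perturbed weights, which is why the worst-case perturbation need not prevent learning.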