Appendix

A Discussion of Related Works
B Proofs of Theorems and Analysis of Bounds
B.1 Proof of Theorem 4.2
Neural Information Processing Systems
Comparison to Ye et al. (2021)  Ye et al. (2021) derive a generalization bound for the domain generalization problem in terms of variation across features. In comparison, we address the problem of learning with an unforeseen adversary and define unforeseen adversarial generalizability; our variation-based generalization bound is an instantiation of this generalizability framework.

Comparison to Laidlaw et al. (2021)  In their proposed algorithm, perceptual adversarial training (PAT), Laidlaw et al. (2021) combine standard adversarial training with adversarial examples generated by their LPIPS-bounded attack. In the terminology introduced in our paper, Laidlaw et al. (2021) improve the choice of source threat model while keeping an existing learning algorithm (adversarial training). Our work takes the complementary perspective: we fix the source threat model and improve the learning algorithm. This allows us to combine our approach with various source threat models, including the attacks used by Laidlaw et al. (2021) in PAT (see Appendix E.10).
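The distinction above can be made concrete in code. The following is a minimal, hedged sketch (not the paper's implementation) of an adversarial training loop for logistic regression in which the source threat model is a pluggable `attack` function: swapping in a different attack (as PAT does with its LPIPS-bounded attack) changes the source threat model, while changing the weight-update logic changes the learning algorithm. All function names and hyperparameters here are illustrative assumptions.

```python
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def fgsm_attack(w, x, y, eps=0.1):
    # Illustrative L_inf-bounded source threat model: perturb each input
    # along the sign of the loss gradient with respect to x.
    grad_x = (sigmoid(x @ w) - y)[:, None] * w[None, :]
    return x + eps * np.sign(grad_x)


def adversarial_train(x, y, attack, lr=0.1, steps=200):
    # The learning algorithm (here, plain gradient descent on the
    # adversarial logistic loss) is separate from the attack, so either
    # component can be improved independently.
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=x.shape[1])
    for _ in range(steps):
        x_adv = attack(w, x, y)                        # source threat model
        grad_w = x_adv.T @ (sigmoid(x_adv @ w) - y) / len(y)
        w -= lr * grad_w                               # learner update
    return w


# Toy two-class data: well-separated Gaussian clusters.
rng = np.random.default_rng(1)
x = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])
w = adversarial_train(x, y, fgsm_attack)
acc = np.mean((sigmoid(x @ w) > 0.5) == y)
print(f"clean accuracy after adversarial training: {acc:.2f}")
```

In this framing, PAT corresponds to replacing `fgsm_attack` with a perceptually bounded attack while leaving `adversarial_train` unchanged; the approach in this paper corresponds to keeping the attack fixed and modifying the training procedure.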