On the Adversarial Robustness of Out-of-distribution Generalization Models

Neural Information Processing Systems 

Out-of-distribution (OOD) generalization has attracted increasing research attention in recent years owing to its promising results in real-world applications. Interestingly, we find that existing OOD generalization methods are vulnerable to adversarial attacks, which motivates us to study adversarial robustness in the OOD setting.
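To make the notion of an adversarial attack concrete, here is a minimal sketch of a one-step sign-gradient (FGSM-style) perturbation against a toy logistic classifier. The paper does not specify which attack it uses, so the choice of FGSM and all names below (`fgsm_perturb`, the toy weights) are illustrative assumptions, not the authors' method.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step sign-gradient perturbation of input x against a logistic
    classifier with weights w, bias b, and true label y in {0, 1}.
    (FGSM-style sketch; not necessarily the attack used in the paper.)"""
    logit = x @ w + b
    p = 1.0 / (1.0 + np.exp(-logit))   # sigmoid probability of class 1
    grad = (p - y) * w                 # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad)     # step in the sign of the gradient

# Toy demonstration: a correctly classified point is flipped by the attack.
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([0.3, -0.3])              # clean logit = 0.6 > 0, class 1
y = 1.0
x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
```

After the perturbation the logit becomes negative, so the classifier's prediction flips even though the input moved only slightly in each coordinate.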
