On the Adversarial Robustness of Out-of-distribution Generalization Models
Neural Information Processing Systems
Out-of-distribution (OOD) generalization has attracted increasing research attention in recent years, owing to its promising performance in real-world applications. Interestingly, we find that existing OOD generalization methods are vulnerable to adversarial attacks. This motivates us to study OOD adversarial robustness.