Out-of-Distribution Generalization via Risk Extrapolation (REx)
Krueger, David; Caballero, Ethan; Jacobsen, Joern-Henrik; Zhang, Amy; Binas, Jonathan; Le Priol, Remi; Courville, Aaron
arXiv.org Artificial Intelligence
Generalizing outside of the training distribution is an open challenge for current machine learning systems. A weak form of out-of-distribution (OoD) generalization is the ability to successfully interpolate between multiple observed distributions. One way to achieve this is through robust optimization, which seeks to minimize the worst-case risk over convex combinations of the training distributions. However, a much stronger form of OoD generalization is the ability of models to extrapolate beyond the distributions observed during training. In pursuit of strong OoD generalization, we introduce the principle of Risk Extrapolation (REx). REx can be viewed as encouraging robustness over affine combinations of training risks, by encouraging strict equality between training risks. We show conceptually how this principle enables extrapolation, and demonstrate the effectiveness and scalability of instantiations of REx on various OoD generalization tasks. Our code can be found at https://github.com/capybaralet/REx_code_release.
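The abstract says REx encourages robustness over affine combinations of training risks "by encouraging strict equality between training risks." A minimal sketch of how such an objective can be formed is to penalize the variance of the per-environment risks, so that unequal risks are discouraged. The function name, `beta` weight, and use of NumPy here are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def rex_objective(env_risks, beta=10.0):
    """Combine per-environment training risks into one scalar objective.

    Sketch of a REx-style penalty (an assumption for illustration):
    the mean risk plus `beta` times the variance of the risks.
    A large `beta` pushes the risks toward strict equality, which is
    how REx encourages robustness over affine combinations of the
    training risks.
    """
    risks = np.asarray(env_risks, dtype=float)
    return risks.mean() + beta * risks.var()
```

With equal risks the penalty vanishes and the objective reduces to the mean risk; any spread between environments adds a cost that grows with `beta`, so minimizing this objective trades average performance for equality across training distributions.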
Mar-2-2020