Iterative Feature Matching: Toward Provable Domain Generalization with Logarithmic Environments
Yining Chen, Elan Rosenfeld, Mark Sellke, Tengyu Ma, Andrej Risteski
Domain generalization aims to perform well on unseen environments using labeled data from a limited number of training environments [Blanchard et al., 2011]. In contrast to transfer learning or domain adaptation, domain generalization assumes that neither labeled nor unlabeled data from the test environments is available at training time. For example, a medical diagnostic system may have access to training datasets from only a few hospitals, but will be deployed on test cases from many other hospitals [Choudhary et al., 2020]; a traffic scene semantic segmentation system may be trained on data from specific weather conditions, but will need to perform well under other conditions [Yue et al., 2019]. Many algorithms have been proposed for domain generalization, including Invariant Risk Minimization (IRM) [Arjovsky et al., 2019] and its variants. IRM is inspired by the principle of invariance of causal mechanisms [Pearl, 2009], which, under sufficiently strong assumptions, allows for provable identifiability of the features that achieve minimax domain generalization [Peters et al., 2016, Heinze-Deml et al., 2018]. However, empirical results for these algorithms are mixed: Gulrajani and Lopez-Paz [2021] and Aubin et al. [2021] present experimental evidence that these methods do not consistently outperform empirical risk minimization (ERM) on either realistic benchmarks or simple linear data models.
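For context, IRM is usually implemented in practice via the IRMv1 gradient penalty of Arjovsky et al. [2019]. The sketch below is a minimal PyTorch illustration of that penalty for binary classification, with hypothetical tensor inputs `logits` and `y`; it shows the baseline objective the abstract refers to, not the iterative feature matching method proposed in this paper.

```python
import torch
import torch.nn.functional as F

def irmv1_penalty(logits: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """IRMv1 penalty (Arjovsky et al., 2019), sketched for binary classification.

    Measures how far one environment's risk is from being stationary with
    respect to a fixed scalar classifier of value 1.0 placed on top of the
    shared featurizer (represented here by `logits`).
    """
    scale = torch.tensor(1.0, requires_grad=True)
    loss = F.binary_cross_entropy_with_logits(logits * scale, y)
    grad, = torch.autograd.grad(loss, [scale], create_graph=True)
    return grad.pow(2)

# The IRMv1 training objective then sums, over environments e,
#   risk_e + lambda * irmv1_penalty(logits_e, y_e),
# trading off average risk against invariance of the optimal classifier.
```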
Jun-18-2021