Estimating Generalization under Distribution Shifts via Domain-Invariant Representations
Ching-Yao Chuang, Antonio Torralba, Stefanie Jegelka
When machine learning models are deployed on a test distribution different from the training distribution, they can perform poorly while overestimating their performance. In this work, we aim to better estimate a model's performance under distribution shift, without supervision. To do so, we use a set of domain-invariant predictors as a proxy for the unknown, true target labels. Since the error of the resulting risk estimate depends on the target risk of the proxy model, we study generalization of domain-invariant representations and show that the complexity of the latent representation has a significant influence on the target risk. Empirically, our approach (1) enables self-tuning of domain adaptation models, and (2) accurately estimates the target error of given models under distribution shift. Other applications include model selection, deciding early stopping, and error detection.
Jul-6-2020
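To illustrate the core idea of using domain-invariant predictors as a proxy for missing target labels, here is a minimal sketch: the target error of a model is estimated from its disagreement with proxy predictors on unlabeled target data. The function name `estimate_target_error`, the worst-case aggregation over proxies, and the toy prediction arrays are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def estimate_target_error(check_preds, proxy_preds_list):
    """Estimate a model's target error without target labels.

    check_preds:      (n,) predicted labels of the model under evaluation
                      on unlabeled target data.
    proxy_preds_list: list of (n,) predicted labels from domain-invariant
                      proxy models on the same target data.

    The proxies stand in for the unknown true labels: the estimate is the
    worst-case disagreement between the evaluated model and any proxy.
    """
    disagreements = [np.mean(check_preds != proxy) for proxy in proxy_preds_list]
    return max(disagreements)

# Toy usage with hypothetical predictions on 5 unlabeled target points.
check = np.array([0, 1, 1, 0, 1])
proxies = [np.array([0, 1, 0, 0, 1]), np.array([0, 1, 1, 1, 1])]
print(estimate_target_error(check, proxies))  # 0.2
```

The same disagreement signal can then drive the applications mentioned above, e.g., choosing between candidate models or deciding when to stop training, by preferring the model or checkpoint with the lowest estimated target error.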