Leveraging Domain Relations for Domain Generalization
Huaxiu Yao, Xinyu Yang, Xinyi Pan, Shengchao Liu, Pang Wei Koh, Chelsea Finn
arXiv.org Artificial Intelligence
Distribution shift is a major challenge in machine learning: models often perform poorly at test time when the test distribution differs from the training distribution. In this paper, we focus on domain shifts, which occur when a model is applied to new domains that differ from the ones it was trained on, and propose a new approach called D^3G. Unlike previous approaches that aim to learn a single domain-invariant model, D^3G learns domain-specific models by leveraging the relations among different domains. Concretely, D^3G learns a set of training-domain-specific functions during the training stage and reweights them based on domain relations during the test stage. These domain relations can be directly derived or learned from fixed domain meta-data. Under mild assumptions, we theoretically prove that using domain relations to reweight training-domain-specific functions achieves stronger generalization than averaging them. Empirically, we evaluate the effectiveness of D^3G on both toy and real-world datasets for tasks such as temperature regression, land use classification, and molecule-protein interaction prediction. Our results show that D^3G consistently outperforms state-of-the-art methods, with an average performance improvement of 10.6%.
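The test-time reweighting step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the use of a convex combination, and the toy linear "domain experts" are all assumptions for exposition; the key idea from the abstract is only that per-training-domain predictors are combined with weights derived from relations between the test domain and each training domain.

```python
import numpy as np

def reweighted_prediction(x, domain_fns, relations):
    """Combine training-domain-specific predictions for input x.

    domain_fns: list of per-domain prediction functions f_d(x)
                (hypothetical interface, one per training domain).
    relations:  non-negative similarities between the test domain and
                each training domain, e.g. derived from domain meta-data.
    """
    relations = np.asarray(relations, dtype=float)
    weights = relations / relations.sum()          # normalize to a convex combination
    preds = np.array([f(x) for f in domain_fns])   # one prediction per training domain
    return weights @ preds                         # relation-weighted average

# Toy usage: three linear "domain experts" on a scalar input (illustrative only)
domain_fns = [lambda x: 1.0 * x, lambda x: 2.0 * x, lambda x: 3.0 * x]
relations = [0.7, 0.2, 0.1]  # test domain is most related to the first training domain
print(reweighted_prediction(2.0, domain_fns, relations))  # 2*(0.7*1 + 0.2*2 + 0.1*3) = 2.8
```

A uniform `relations` vector recovers the plain averaging baseline that the paper's theory compares against, which makes the role of the learned relations easy to see in this sketch.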
Feb-6-2023