Domain-invariant representation

Domain Adaptation with Conditional Distribution Matching and Generalized Label Shift

Neural Information Processing Systems

Adversarial learning has demonstrated good performance in the unsupervised domain adaptation setting by learning domain-invariant representations. However, recent work has shown limitations of this approach when label distributions differ between the source and target domains. In this paper, we propose a new assumption, generalized label shift (GLS), to improve robustness against mismatched label distributions.
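To make the adversarial setup above concrete, here is a minimal sketch of domain-invariant representation learning with a gradient reversal layer (DANN-style). This is a generic illustration, not the paper's exact method, which additionally handles label shift; the networks `g`, `h`, `d` and their sizes are hypothetical.

```python
import torch
import torch.nn as nn
from torch.autograd import Function

class GradReverse(Function):
    """Identity on the forward pass; negates the gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Illustrative networks: g extracts features, h classifies labels,
# d discriminates source vs. target from the (gradient-reversed) features.
g = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
h = nn.Linear(256, 10)
d = nn.Linear(256, 1)

cls_loss = nn.CrossEntropyLoss()
dom_loss = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(
    list(g.parameters()) + list(h.parameters()) + list(d.parameters()), lr=1e-3
)

def step(xs, ys, xt):
    """One update: classify labeled source data while making source and
    target features indistinguishable to the domain discriminator."""
    zs, zt = g(xs), g(xt)
    l_cls = cls_loss(h(zs), ys)
    z = torch.cat([zs, zt])
    dom = torch.cat([torch.ones(len(xs), 1), torch.zeros(len(xt), 1)])
    l_dom = dom_loss(d(grad_reverse(z)), dom)
    opt.zero_grad()
    (l_cls + l_dom).backward()  # reversal makes g maximize the domain loss
    opt.step()
```

The reversal layer lets a single minimization pass train the discriminator normally while pushing the feature extractor toward domain-invariant features.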


Appendix: On Learning Domain-Invariant Representations for Transfer Learning with Multiple Sources

Neural Information Processing Systems

Appendix C contains the proof of the trade-off theorem discussed in Section 2.4 of our main paper. In (4), Fubini's theorem is invoked to swap the integral signs. This bound is novel since it relates the loss on the input space to the data shift on the feature space. Lemma 4. Given a source mixture and a target domain, we have the following … Now we are ready to prove the bound which motivates compressed domain-invariant (DI) representations. The objective function in Eq. (6) can be viewed as training the optimal hypothesis. This estimation is unbiased, i.e., … To make it more consistent, we discuss under which conditions the Hellinger loss is in the family defined in (10). Model: we train a hypothesis $\hat{f} = \hat{h} \circ g$ and minimize the classification loss w.r.t. …
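For reference, and assuming the Hellinger loss in this excerpt is based on the standard squared Hellinger distance between densities $p$ and $q$ (the excerpt does not state its exact definition), that distance is

$H^2(p, q) = \frac{1}{2}\int \big(\sqrt{p(x)} - \sqrt{q(x)}\big)^2\,dx = 1 - \int \sqrt{p(x)\,q(x)}\,dx,$

which is symmetric in $p$ and $q$ and bounded in $[0, 1]$, making it a natural candidate for measuring distribution shift in the feature space.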


Exploiting Domain-Specific Features to Enhance Domain Generalization

Neural Information Processing Systems

The domain-specific representation is optimized through a meta-learning framework to adapt from source domains, targeting robust generalization to unseen domains. We empirically show that mDSDI achieves results competitive with state-of-the-art techniques in domain generalization (DG).
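As a rough sketch of the meta-learning idea above: adapt on one source domain, then evaluate the adapted weights on a held-out source domain, so the representation learns to transfer across domains. This is a generic MAML-style inner/outer loop, not necessarily mDSDI's exact procedure; `ds_net`, the layer sizes, and `inner_lr` are illustrative.

```python
import torch
import torch.nn as nn

# Hypothetical domain-specific network; the actual mDSDI architecture may differ.
ds_net = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 10))
loss_fn = nn.CrossEntropyLoss()
meta_opt = torch.optim.Adam(ds_net.parameters(), lr=1e-3)

def meta_step(support, query, inner_lr=0.01):
    """One meta-update: take a gradient step on a held-in source domain
    (support), then backprop the loss of the adapted weights on a
    held-out source domain (query) into the original parameters."""
    xs, ys = support
    xq, yq = query
    # Inner loop: one adaptation step; create_graph keeps second-order grads.
    inner_loss = loss_fn(ds_net(xs), ys)
    grads = torch.autograd.grad(inner_loss, ds_net.parameters(), create_graph=True)
    adapted = [p - inner_lr * g for p, g in zip(ds_net.parameters(), grads)]
    # Outer loop: functional forward pass through Linear-ReLU-Linear
    # using the adapted weights (order: W0, b0, W2, b2).
    out = torch.relu(xq @ adapted[0].t() + adapted[1])
    out = out @ adapted[2].t() + adapted[3]
    meta_loss = loss_fn(out, yq)
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
```

Splitting the source domains into support/query pairs each step simulates the train-on-seen, test-on-unseen situation the method targets at deployment.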