Appendix: On Learning Domain-Invariant Representations for Transfer Learning with Multiple Sources
Neural Information Processing Systems
Appendix C contains the proof of the trade-off theorem discussed in Section 2.4 of our main paper.

In (4), Fubini's theorem is invoked to swap the order of integration. This bound is novel in that it relates the loss on the input space to the data shift on the feature space.

Lemma 4. Given a source mixture and a target domain, we have the following result.

Now we are ready to prove the bound that motivates compressed domain-invariant (DI) representations.

The objective function in Eq. (6) can be viewed as training the optimal hypothesis, and this estimate is unbiased. To make the discussion more consistent, we examine under which conditions the Hellinger loss belongs to the family defined in (10).

Model. We train a hypothesis $\hat{f} = \hat{h} \circ g$ and minimize the classification loss.
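As an illustration of this objective, the following is a minimal sketch under assumed notation (the snippet itself does not fix these symbols): $K$ source domains $\mathbb{P}^{S_i}$ with mixture weights $\pi_i$, a classification loss $\ell$, and hypothesis/feature-map classes $\mathcal{H}$ and $\mathcal{G}$.

% Hedged sketch only: the mixture weights \pi_i, source distributions
% \mathbb{P}^{S_i}, loss \ell, and classes \mathcal{H}, \mathcal{G} are
% assumed notation, not taken from the snippet.
\[
  \hat{h}, \hat{g}
  \;=\;
  \operatorname*{arg\,min}_{h \in \mathcal{H},\, g \in \mathcal{G}}
  \; \sum_{i=1}^{K} \pi_i \,
  \mathbb{E}_{(x, y) \sim \mathbb{P}^{S_i}}
  \bigl[ \ell\bigl( h(g(x)),\, y \bigr) \bigr],
  \qquad
  \hat{f} \;=\; \hat{h} \circ \hat{g}.
\]

Under this reading, the learned feature map $\hat{g}$ is shared across the source domains, which is consistent with the compressed DI representations motivated above.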