Appendix: On Learning Domain-Invariant Representations for Transfer Learning with Multiple Sources
Neural Information Processing Systems
A hypothesis is $\hat{f}: \mathcal{X} \to \mathcal{Y}$, where $\hat{f} = \hat{h} \circ g$ with a feature map $g: \mathcal{X} \to \mathcal{Z}$ and a classifier $\hat{h}: \mathcal{Z} \to \mathcal{Y}$.

Corollary 2. Consider a domain $\mathbb{D} = (P, f)$ with data distribution $P$ and ground-truth labeling function $f$.

A bound of this kind is stated in terms of the data distribution $P$ on the input space and the labeling function $f$ from input space to label space. This is not convenient for understanding representation learning, since $P_T$ and $P_S$ are properties of the data and are therefore fixed.

Theorem 3 (Theorem 1 in the main paper). Consider a mixture of source domains $\mathbb{D}_\pi =$

Next, we relate the loss on the target domain $\mathbb{D}_T^g$ to that on the hybrid domain $\mathbb{D}_{hy}^g$, which differ only in their feature marginals. In other words, equality holds when all distributions coincide: $Q_1 = \dots = Q_C$.
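As a minimal sketch of the decomposition $\hat{f} = \hat{h} \circ g$ discussed above: the hypothesis factors through a feature (representation) space $\mathcal{Z}$. The concrete encoder and classifier below (a linear-ReLU map and a linear argmax scorer, with these particular shapes) are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def g(x, W):
    # Feature map g: X -> Z (assumed: linear layer followed by ReLU).
    return np.maximum(W @ x, 0.0)

def h(z, V):
    # Classifier h: Z -> Y (assumed: linear scores with argmax decision).
    return int(np.argmax(V @ z))

def f_hat(x, W, V):
    # Composed hypothesis f_hat = h ∘ g, mapping inputs directly to labels.
    return h(g(x, W), V)

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))  # encoder weights: dim(Z) = 4, dim(X) = 3
V = rng.standard_normal((2, 4))  # classifier weights: 2 output classes
x = rng.standard_normal(3)       # a single input point
y = f_hat(x, W, V)               # predicted label in {0, 1}
```

The point of the factorization is that domain-invariance arguments are phrased over the pushforward (feature-marginal) distributions on $\mathcal{Z}$ induced by $g$, rather than over the fixed input distributions $P_S$, $P_T$.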