Learning Bounds for Domain Adaptation
John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, Jennifer Wortman
Neural Information Processing Systems
Empirical risk minimization offers well-known learning guarantees when training and test data come from the same domain. In the real world, though, we often wish to adapt a classifier from a source domain with a large amount of training data to a different target domain with very little training data. In this work we give uniform convergence bounds for algorithms that minimize a convex combination of source and target empirical risk. The bounds explicitly model the inherent trade-off between training on a large but inaccurate source data set and a small but accurate target training set. Our theory also gives results when we have multiple source domains, each of which may have a different number of instances, and we exhibit cases in which minimizing a non-uniform combination of source risks can achieve much lower target error than standard empirical risk minimization.
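As a sketch of the objective the abstract describes (the notation below is assumed for illustration, not quoted from the paper): writing \(\hat{\epsilon}_S(h)\) and \(\hat{\epsilon}_T(h)\) for the empirical risks of hypothesis \(h\) on the source and target samples, and \(\alpha \in [0,1]\) for a mixing weight, the analyzed algorithms minimize the combined risk

\[
\hat{\epsilon}_\alpha(h) \;=\; \alpha\, \hat{\epsilon}_T(h) \;+\; (1-\alpha)\, \hat{\epsilon}_S(h).
\]

Intuitively, larger \(\alpha\) relies more on the small but accurate target sample (higher variance), while smaller \(\alpha\) leans on the large but possibly mismatched source sample (higher bias); the bounds quantify this trade-off.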