Collaborating Authors

Co-regularization Based Semi-supervised Domain Adaptation

Neural Information Processing Systems

This paper presents a co-regularization based approach to semi-supervised domain adaptation. Our proposed approach (EA++) builds on the notion of the augmented space introduced in EASYADAPT (EA) [1] and harnesses unlabeled data in the target domain to further enable the transfer of information from source to target. This semi-supervised approach to domain adaptation is extremely simple to implement and can be applied as a pre-processing step to any supervised learner. Our theoretical analysis (in terms of Rademacher complexity) of EA and EA++ shows that the hypothesis class of EA++ has lower complexity than that of EA and hence yields tighter generalization bounds. Experimental results on sentiment analysis tasks reinforce our theoretical findings and demonstrate the efficacy of the proposed method compared to EA as well as several other baseline approaches.
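
Because EA (and, by extension, EA++) operates purely as a feature pre-processing step, the core idea is easy to sketch. The snippet below is a minimal illustration of the EASYADAPT augmentation, assuming the standard mapping in which a source example x becomes (x, x, 0) and a target example becomes (x, 0, x); how EA++ encodes unlabeled target data for co-regularization is only hinted at in a comment, since its exact construction follows the paper rather than this sketch.

```python
import numpy as np

def easyadapt_augment(X, domain):
    """EASYADAPT feature augmentation (a sketch, not the authors' code).

    Source examples are mapped to (x, x, 0) and target examples to
    (x, 0, x): the first block is shared across domains, while the
    remaining two blocks are domain-specific.
    """
    n, d = X.shape
    zeros = np.zeros((n, d))
    if domain == "source":
        return np.hstack([X, X, zeros])
    if domain == "target":
        return np.hstack([X, zeros, X])
    raise ValueError("domain must be 'source' or 'target'")

# Usage: augment labeled data from both domains, stack the results, and
# train any off-the-shelf supervised learner on the augmented features.
Xs = np.random.randn(100, 20)   # hypothetical labeled source features
Xt = np.random.randn(10, 20)    # hypothetical labeled target features
X_aug = np.vstack([easyadapt_augment(Xs, "source"),
                   easyadapt_augment(Xt, "target")])
# EA++ additionally constructs augmented examples from unlabeled target
# data so that the source and target parts of the hypothesis are
# co-regularized to agree on them.
```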


Fredholm Multiple Kernel Learning for Semi-Supervised Domain Adaptation

AAAI Conferences

For good generalization capability, it is required to collect and label plenty of training samples following the same distribution as the test samples, which, however, is extremely expensive in practical applications. To relieve the contradiction between generalization performance and label cost, domain adaptation (Pan and Yang 2010) has been proposed to transfer knowledge from a relevant but different source domain with sufficient labeled data to the target domain. It gains increased importance in many applied areas of machine learning (Daumé III 2007; Pan et al. 2011), including natural language processing, computer vision and WiFi localization. To further incorporate unlabeled samples with labeled ones in model learning, the kernel prediction framework has been developed into a semi-supervised setting by formulating a regularized Fredholm integral equation (Que, Belkin, and Wang 2014). Although this development is proven theoretically and empirically to be effective in noise suppression, the performance heavily depends on the choice of a single predefined kernel function. The reason is that its solution is based on the Representer Theorem (Scholkopf and Smola 2001) in the Reproducing Kernel Hilbert Space (RKHS) induced by the kernel. More importantly, the kernel methods are not developed ...
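
To make the role of unlabeled data in the Fredholm framework concrete, the sketch below computes a data-dependent kernel of the Fredholm type, assuming the form K_F(x, z) = (1/u^2) * sum_{i,j} k_out(x, x_i) k_in(x_i, x_j) k_out(x_j, z), where x_1..x_u are unlabeled samples; the function names and the RBF choices are illustrative assumptions, not the paper's implementation, and the multiple-kernel learning part is only noted in a comment.

```python
import numpy as np

def rbf(A, B, gamma):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def fredholm_kernel(X1, X2, X_unlab, gamma_out=1.0, gamma_in=1.0):
    """Data-dependent kernel of the Fredholm type (a sketch).

    Assumes K_F(x, z) = (1/u^2) * sum_{i,j} k_out(x, x_i) k_in(x_i, x_j) k_out(x_j, z),
    so the unlabeled points act as a 'bridge' that denoises the similarity
    between x and z.
    """
    u = X_unlab.shape[0]
    K1 = rbf(X1, X_unlab, gamma_out)         # (n1, u) outer kernel to unlabeled data
    K2 = rbf(X_unlab, X2, gamma_out)         # (u, n2) outer kernel from unlabeled data
    K_in = rbf(X_unlab, X_unlab, gamma_in)   # (u, u) inner kernel on unlabeled data
    return (K1 @ K_in @ K2) / (u ** 2)

# The multiple kernel learning extension described in the paper would replace
# the single fixed k_out / k_in above with learned combinations of base kernels.
```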


Sample-to-Sample Correspondence for Unsupervised Domain Adaptation

arXiv.org Machine Learning

The assumption that training and testing samples are generated from the same distribution does not always hold for real-world machine-learning applications. The procedure of tackling this discrepancy between the training (source) and testing (target) domains is known as domain adaptation. We address the unsupervised version of domain adaptation, in which only unlabelled data are available in the target domain. Our approach centers on finding correspondences between samples of the two domains. The correspondences are obtained by treating the source and target samples as graphs and using a convex criterion to match them. The criteria used are first-order and second-order similarities between the graphs as well as a class-based regularization. We have also developed a computationally efficient routine for the convex optimization, thus allowing the proposed method to be used widely. To verify the effectiveness of the proposed method, computer simulations were conducted on synthetic, image classification and sentiment classification datasets. The results validate that the proposed local sample-to-sample matching method outperforms traditional moment-matching methods and is competitive with current local domain-adaptation methods.
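
As a rough illustration of the sample-to-sample correspondence idea, the sketch below matches source samples to target samples using only a first-order criterion (pairwise feature distances) solved as a linear assignment problem; the paper's actual formulation is a convex graph-matching objective that also includes second-order similarities and a class-based regularizer, both of which this simplified sketch omits.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def first_order_correspondence(X_src, X_tgt):
    """Match each source sample to a target sample (simplified sketch).

    Only the first-order term (pointwise feature distances) is used here;
    the full method additionally enforces second-order (graph-structure)
    similarity and class-based regularization in a convex objective.
    """
    cost = cdist(X_src, X_tgt)                 # first-order dissimilarity matrix
    rows, cols = linear_sum_assignment(cost)   # one-to-one correspondence
    return rows, cols

# Usage on hypothetical toy data: align the domains by mapping each matched
# source sample onto its corresponding target sample.
X_src = np.random.randn(50, 10)
X_tgt = np.random.randn(50, 10) + 0.5
src_idx, tgt_idx = first_order_correspondence(X_src, X_tgt)
```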


Self-Taught Support Vector Machine

arXiv.org Machine Learning

In this paper, we propose a new approach, called self-taught learning, for classifying a target task using limited labeled target data together with a large amount of unlabeled source data. The target and source data can be drawn from different distributions. Previous approaches assume covariate shift, where the marginal distributions p(x) change across domains while the conditional distributions p(y|x) remain the same. In our approach, we propose a new objective function that simultaneously learns a common space T(.), in which the conditional distributions p(T(x)|y) remain the same across domains, and learns robust SVM classifiers for the target task using both source and target data in the new representation. Hence, the proposed objective function also incorporates the hidden labels of the source data. We applied the proposed approach to the Caltech-256 and MSRC+LMO datasets and compared the performance of our algorithm to the available competing methods. Our method outperforms the successful existing algorithms.
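
To give a flavor of how a shared representation and the hidden source labels can be used together, the sketch below is a very loose stand-in: it takes the common space T to be a PCA projection fit on pooled data and estimates the hidden source labels by self-training a linear SVM, rather than optimizing the paper's joint objective. Every design choice here (PCA, LinearSVC, the number of rounds) is an assumption made for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

def self_taught_svm_sketch(X_tgt, y_tgt, X_src, n_components=20, n_rounds=3):
    """Loose sketch of the self-taught SVM idea (not the paper's solver).

    A shared linear map T is approximated by PCA on pooled source + target
    data; the hidden labels of the source data are then estimated by the
    SVM itself and fed back as pseudo-labels over a few rounds.
    """
    T = PCA(n_components=n_components).fit(np.vstack([X_src, X_tgt]))
    Zt, Zs = T.transform(X_tgt), T.transform(X_src)

    clf = LinearSVC().fit(Zt, y_tgt)            # start from the labeled target data
    for _ in range(n_rounds):
        y_src_pseudo = clf.predict(Zs)          # estimate hidden source labels
        Z = np.vstack([Zt, Zs])
        y = np.concatenate([y_tgt, y_src_pseudo])
        clf = LinearSVC().fit(Z, y)             # retrain on target + pseudo-labeled source
    return T, clf
```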


Learning Bounds for Domain Adaptation

Neural Information Processing Systems

Empirical risk minimization offers well-known learning guarantees when training and test data come from the same domain. In the real world, though, we often wish to adapt a classifier from a source domain with a large amount of training data to a different target domain with very little training data. In this work we give uniform convergence bounds for algorithms that minimize a convex combination of source and target empirical risk. The bounds explicitly model the inherent trade-off between training on a large but inaccurate source data set and a small but accurate target training set. Our theory also gives results when we have multiple source domains, each of which may have a different number of instances, and we exhibit cases in which minimizing a non-uniform combination of source risks can achieve much lower target error than standard empirical risk minimization.
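
The objective analyzed here, alpha times the target empirical risk plus (1 - alpha) times the source empirical risk, is straightforward to instantiate with any learner that accepts per-example weights. The sketch below does this with logistic loss, assuming that weighting each target example by alpha / m_T and each source example by (1 - alpha) / m_S reproduces the convex combination; the bounds in the paper are classifier-agnostic, so the choice of LogisticRegression is purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def weighted_erm(X_src, y_src, X_tgt, y_tgt, alpha=0.5):
    """Minimize alpha * target empirical risk + (1 - alpha) * source risk.

    Each target example receives weight alpha / m_T and each source example
    (1 - alpha) / m_S, so the weighted training loss equals the convex
    combination of the two empirical risks (a sketch with logistic loss).
    """
    m_s, m_t = len(X_src), len(X_tgt)
    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([y_src, y_tgt])
    w = np.concatenate([np.full(m_s, (1 - alpha) / m_s),
                        np.full(m_t, alpha / m_t)])
    return LogisticRegression().fit(X, y, sample_weight=w)
```

Sweeping alpha between 0 (source-only training) and 1 (target-only training) traces out the trade-off that the bounds in this paper make explicit.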