landmark distribution
- North America > Canada > Quebec > Montreal (0.04)
- Europe > France > Auvergne-Rhône-Alpes > Lyon > Lyon (0.04)
Review: Revisiting $(\epsilon, \gamma, \tau)$-similarity learning for domain adaptation
This paper takes a theoretical look at domain adaptation / transfer learning problems through the lens of similarity learning. The authors extend an already established similarity learning theoretical framework to the case where the training and testing distributions differ. They rigorously prove the following:
- An $(\epsilon, \gamma)$-good similarity for a problem in a source domain is also an $(\epsilon \epsilon', \gamma)$-good similarity in a target domain, assuming the same landmark distribution on both the source and the target. When the landmark distributions differ, the problem in the target domain becomes $(\epsilon \epsilon' \epsilon'', \gamma)$-good. Both $\epsilon'$ and $\epsilon''$ are formally derived in the paper.
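For context, the notion of goodness the review refers to is, in the Balcan–Blum style framework this line of work builds on, roughly the following (a sketch from memory, not the paper's exact statement; the paper's version also involves the landmark parameter $\tau$ and a hinge-loss variant):

```latex
% A similarity K is (\epsilon,\gamma)-good for a binary classification
% problem if at least a (1-\epsilon) fraction of examples (x,y) are, on
% average, \gamma-more similar to examples sharing their label:
\mathbb{P}_{(x,y)}\!\left[\,
  \mathbb{E}_{(x',y')}\big[\, y\, y'\, K(x, x') \,\big] \ge \gamma
\,\right] \;\ge\; 1 - \epsilon .
```

The transfer results summarized above then bound how much $\epsilon$ degrades when the expectation is taken under the target rather than the source distribution.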
Landmark Alternating Diffusion
Yeh, Sing-Yuan, Wu, Hau-Tieng, Talmon, Ronen, Tsui, Mao-Pei
Alternating Diffusion (AD) is a commonly applied diffusion-based sensor fusion algorithm. While it has been successfully applied to various problems, its computational burden remains a limitation. Inspired by the landmark diffusion idea considered in the Robust and Scalable Embedding via Landmark Diffusion (ROSELAND), we propose a variation of AD, called Landmark AD (LAD), which captures the essence of AD while offering superior computational efficiency. We provide a series of theoretical analyses of LAD under the manifold setup and apply it to the automatic sleep stage annotation problem with two electroencephalogram channels to demonstrate its application.
- Asia > Taiwan > Taiwan Province > Taipei (0.04)
- Oceania > Australia > Queensland > Brisbane (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Asia > Middle East > Israel > Haifa District > Haifa (0.04)
- Research Report > Experimental Study (0.46)
- Research Report > New Finding (0.45)
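To make the abstract's setup concrete, here is a minimal sketch of plain Alternating Diffusion (not the authors' landmark variant): each sensor's view yields a row-stochastic diffusion operator, the two operators are composed, and the leading nontrivial eigenvectors give a fused embedding. LAD, following the ROSELAND idea, would additionally restrict affinities to a small landmark set to cut the computational cost; all names below are illustrative, not from the paper.

```python
import numpy as np

def diffusion_operator(X, sigma=1.0):
    """Row-stochastic diffusion operator from a Gaussian affinity kernel."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T  # pairwise squared distances
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    return W / W.sum(axis=1, keepdims=True)

def alternating_diffusion(X1, X2, sigma=1.0, n_coords=3):
    """Fuse two sensor views by composing their diffusion operators,
    then embed with the leading nontrivial eigenvectors."""
    A = diffusion_operator(X1, sigma) @ diffusion_operator(X2, sigma)
    vals, vecs = np.linalg.eig(A)
    order = np.argsort(-vals.real)
    # Skip the trivial constant eigenvector (eigenvalue 1 of a stochastic matrix).
    return vecs[:, order[1:n_coords + 1]].real

# Toy usage: two noisy sensor views of the same latent circle.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 2.0 * np.pi, 200))
X1 = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.standard_normal((200, 2))
X2 = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.standard_normal((200, 2))
emb = alternating_diffusion(X1, X2)  # shared-variable embedding, shape (200, 3)
```

The composition of the two operators is what suppresses sensor-specific nuisance variables: only structure diffused consistently by both views survives in the product.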
Revisiting $(\epsilon, \gamma, \tau)$-similarity learning for domain adaptation
Dhouib, Sofiane, Redko, Ievgen
Similarity learning is an active research area in machine learning that tackles the problem of finding a similarity function tailored to an observable data sample in order to achieve efficient classification. This learning scenario has generally been formalized by means of an $(\epsilon, \gamma, \tau)$-good similarity learning framework in the context of supervised classification and has been shown to have strong theoretical guarantees. In this paper, we propose to extend the theoretical analysis of similarity learning to the domain adaptation setting, a particular situation occurring when the similarity is learned and then deployed on samples following different probability distributions. We give a new definition of an $(\epsilon, \gamma)$-good similarity for domain adaptation and prove several results quantifying the performance of a similarity function on a target domain after it has been trained on a source domain. We particularly show that if the source distribution dominates the target one, then principally new domain adaptation learning bounds can be proved.
- North America > Canada > Quebec > Montreal (0.04)
- Europe > France > Auvergne-Rhône-Alpes > Lyon > Lyon (0.04)