Model Adaptation: Historical Contrastive Learning for Unsupervised Domain Adaptation without Source Data
Supplemental Materials
Anonymous Author(s)
Maximum likelihood (ML) is a classical concept for describing the theoretical insights of clustering algorithms. As Eq. 14 is not easy to optimize directly, we employ a surrogate function to lower-bound it.

Please note that the notation $k^{t-m}$ indicates that the key $k$ is encoded by a historical encoder, i.e., the encoder from $m$ steps before the current step $t$. In practice, we achieve Eq. 9 by minimizing a historical contrastive instance discrimination loss. Please note that Eq. 10 is an instance of Eq. 9; the two equations look different due to: 1) Eq. 10 …

Proposition 2. HCID is convergent under certain conditions.

As illustrated in Section A.1, the inequality in Eq. 11 holds with equality if … One possible way to achieve Eq. 13 is to conduct gradient descent by minimizing the historical …

Different from the classical expectation maximization (mentioned in Section A.1), which consists of an expectation (E) step and a maximization (M) step, … It can be observed that Eq. 16 is the same as Eq. 15 except that it involves an extra weighting element. In the following, we show that the optimization of Eq. 16 is a classification EM (CEM) process.
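The surrogate lower bound used above is, in spirit, the standard Jensen-inequality bound from the EM literature. As a generic illustration (not necessarily the paper's exact Eq. 14), for a latent-variable model with latent $z$ and any distribution $q(z)$:

```latex
\log p(x;\theta)
  = \log \sum_{z} q(z)\,\frac{p(x,z;\theta)}{q(z)}
  \;\ge\; \sum_{z} q(z)\,\log \frac{p(x,z;\theta)}{q(z)}
```

The bound holds with equality if and only if $q(z) = p(z \mid x;\theta)$, which is the standard mechanism by which inequalities of this form become tight.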
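As an illustrative sketch (not the paper's exact formulation), a historical contrastive instance discrimination loss can be written InfoNCE-style, where the positive key for each query is produced by a historical encoder (an earlier checkpoint) rather than the current one. All names, dimensions, and the temperature value below are hypothetical:

```python
import numpy as np

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def hcid_loss(queries, hist_keys, temperature=0.07):
    """InfoNCE-style instance-discrimination loss with historical keys.

    queries:   (N, D) embeddings from the *current* encoder.
    hist_keys: (N, D) embeddings of the same N instances from a
               *historical* encoder (an earlier checkpoint).
    The positive key for query i is hist_keys[i]; all other keys in
    the batch serve as negatives.
    """
    q = l2_normalize(queries)
    k = l2_normalize(hist_keys)
    logits = q @ k.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # cross-entropy with the "same instance" entries on the diagonal as targets
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
# keys unrelated to the queries -> high loss
loss_random = hcid_loss(x, rng.normal(size=(8, 16)))
# keys nearly identical to the queries (a "good" historical encoder) -> low loss
loss_aligned = hcid_loss(x, x + 0.01 * rng.normal(size=(8, 16)))
```

Minimizing this loss pulls each query toward its own historically encoded key and away from the other instances, which matches the instance-discrimination reading of Eq. 10.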
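To make the CEM reading concrete, the following hypothetical sketch shows a classification EM loop on a toy mixture: the E-step scores clusters, the C-step hard-assigns each sample, and the M-step re-estimates centroids as *weighted* means, where the per-sample weight plays the role of the extra weighting element distinguishing Eq. 16 from Eq. 15. This is a generic illustration, not the paper's algorithm:

```python
import numpy as np

def weighted_cem(X, w, K, iters=20):
    """Classification EM (CEM) with per-sample weights.

    E-step: score each cluster for each point (here: squared distance
            to the centroid, i.e. spherical Gaussians with equal priors).
    C-step: hard-assign each point to its best-scoring cluster.
    M-step: re-estimate each centroid as the weighted mean of its
            assigned points, with w[i] the confidence weight of sample i.
    """
    # deterministic init: spread initial centroids across the data
    mu = X[np.linspace(0, len(X) - 1, K).astype(int)].copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)  # (N, K)
        z = d2.argmin(axis=1)              # E + C: hard assignment
        for k in range(K):                 # M: weighted centroid update
            mask = z == k
            if mask.any():
                mu[k] = np.average(X[mask], axis=0, weights=w[mask])
    return mu, z

# toy data: two well-separated blobs, uniform weights
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(5, 0.3, (30, 2))])
w = np.ones(len(X))
mu, z = weighted_cem(X, w, K=2)
```

With uniform weights this reduces to ordinary hard-assignment EM (k-means style); non-uniform weights let confident samples dominate the M-step, which is the effect the weighting element in Eq. 16 is meant to capture.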