8 Supplementary Material
For the GLOW experiment we stacked three GLOW transformations at different scales, each with eight affine coupling blocks interleaved with actnorms and permutations, and each parameterized by a CNN with two hidden layers of 512 filters each.

In a recent arXiv submission, Arjovsky et al. [2] suggested that in the presence of observable variability in the environment e (e.g. …). While this procedure worked on distributions that were very similar to begin with, in the majority of cases the log-likelihood fit to B did not provide informative gradients when evaluated on the transformed dataset, since the KL-divergence between distributions with disjoint supports is infinite. The code is available in lrmf_gradient_simulation.ipynb. The LRMF objective (Eq. 2) decreases over time and reaches zero when the two datasets are aligned.
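The coupling blocks described above can be illustrated with a minimal NumPy sketch of a single affine coupling layer. This is a generic illustration, not the paper's implementation; `scale_net` and `shift_net` are hypothetical stand-ins for the CNN conditioners:

```python
import numpy as np

def affine_coupling(x, scale_net, shift_net):
    """One affine coupling block: transform the second half of the
    dimensions conditioned on the first half (kept unchanged)."""
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    s, t = scale_net(x1), shift_net(x1)   # conditioner outputs
    y2 = x2 * np.exp(s) + t               # elementwise affine transform
    log_det = s.sum(axis=-1)              # log |det J| is just sum of s
    return np.concatenate([x1, y2], axis=-1), log_det

def affine_coupling_inverse(y, scale_net, shift_net):
    """Exact inverse: recompute s, t from the untouched half and undo."""
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    s, t = scale_net(y1), shift_net(y1)
    x2 = (y2 - t) * np.exp(-s)
    return np.concatenate([y1, x2], axis=-1)
```

Because the conditioned half is left untouched, the inverse needs no inversion of the conditioner networks, which is what makes stacking many such blocks (with permutations in between, so every dimension eventually gets transformed) cheap in both directions.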
Cooperative Distribution Alignment via JSD Upper Bound
Wonwoong Cho, Purdue University
Unsupervised distribution alignment estimates a transformation that maps two or more source distributions to a shared aligned distribution, given only samples from each distribution. This task has many applications, including generative modeling, unsupervised domain adaptation, and socially aware learning. Most prior works use adversarial learning (i.e., min-max optimization), which can be …
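For intuition about the divergence in the title: the Jensen-Shannon divergence can be computed directly for discrete distributions. The sketch below is a generic illustration (not the paper's bound); it shows that, unlike the KL divergence — which is infinite for distributions with disjoint supports — JSD is symmetric and bounded by log 2:

```python
import numpy as np

def kl(p, q):
    """KL(p || q) for discrete distributions; terms with p=0 contribute 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def jsd(p, q):
    """Jensen-Shannon divergence: average KL to the mixture m = (p+q)/2.
    Finite (at most log 2) even when p and q have disjoint supports."""
    m = 0.5 * (np.asarray(p, float) + np.asarray(q, float))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

For example, `jsd([1.0, 0.0], [0.0, 1.0])` attains the maximum value log 2, whereas `kl` between the same pair would diverge; this boundedness is one reason JSD-based objectives remain informative where raw likelihood ratios do not.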
The attached IPython notebooks were tested to work as expected in Colab.

On replacing the Gaussian prior with a learned density in normalizing flows. As mentioned in the main paper, the FFJORD LRMF performed on par with the Real NVP version. The dynamics can be found in Figure 10; the rightmost column shows LRMF convergence.
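Replacing the Gaussian prior with a learned density changes only the prior term in the change-of-variables likelihood, log p(x) = log p_Z(f(x)) + log |det Jf(x)|. A minimal sketch of that decomposition follows; all names here are hypothetical and not taken from the released notebooks:

```python
import numpy as np

def flow_log_likelihood(x, forward, log_prior):
    """Change-of-variables log-likelihood of a normalizing flow:
    log p(x) = log p_Z(f(x)) + log |det Jf(x)|.
    `forward` maps x -> (z, log |det Jf(x)|); `log_prior` scores z."""
    z, log_det = forward(x)
    return log_prior(z) + log_det

def standard_normal_logpdf(z):
    """Fixed standard-normal prior; swapping this for the log-pdf of a
    learned density is the modification discussed above."""
    return -0.5 * np.sum(z**2 + np.log(2.0 * np.pi), axis=-1)
```

Nothing else in the flow changes: the transformation and its log-determinant are prior-agnostic, so a learned prior simply substitutes its own `log_prior` in the sum.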