
Population Matching Discrepancy and Applications in Deep Learning

Neural Information Processing Systems

A differentiable estimate of the distance between two distributions based on samples is important for many deep learning tasks. One such estimate is maximum mean discrepancy (MMD). However, when used as a training objective, MMD suffers from a sensitive kernel-bandwidth hyper-parameter, weak gradients, and the need for a large mini-batch size. In this paper, we propose population matching discrepancy (PMD) for estimating the distribution distance based on samples, as well as an algorithm to learn the parameters of the distributions using PMD as an objective. PMD is defined as the minimum weight matching of sample populations from each distribution, and we prove that PMD is a strongly consistent estimator of the first Wasserstein metric. We apply PMD to two deep learning tasks, domain adaptation and generative modeling. Empirical results demonstrate that PMD overcomes the aforementioned drawbacks of MMD and outperforms MMD on both tasks in terms of both performance and convergence speed.
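The abstract defines PMD as the minimum weight matching between two sample populations. A minimal sketch of that idea, using an exact Hungarian solver from SciPy with Euclidean costs (the function name and cost choice are illustrative assumptions, not the paper's implementation, which may use approximate matching solvers):

```python
# Sketch of population matching discrepancy (PMD): the minimum-weight
# matching cost between two equal-size sample populations, which the
# paper shows is a strongly consistent estimator of the first
# Wasserstein metric. Exact solver and Euclidean cost are assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

def pmd(x, y):
    """PMD between populations x and y, each an (N, d) array."""
    # Pairwise Euclidean cost between every sample of x and of y.
    cost = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    # Minimum-weight perfect matching of the two populations.
    rows, cols = linear_sum_assignment(cost)
    # Average matched cost estimates the Wasserstein-1 distance.
    return cost[rows, cols].mean()

x = np.random.RandomState(0).randn(64, 2)
print(pmd(x, x))                          # identical populations: 0.0
print(pmd(x, x + np.array([3.0, 0.0])))   # pure translation: 3.0
```

Because the matched pairwise costs are differentiable in the samples, the matching can be held fixed while gradients flow through the cost terms, which is what allows PMD to serve as a training objective.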

aa1f5f73327ba40d47ebce155e785aaf-AuthorFeedback.pdf

Neural Information Processing Systems

We would like to thank all the reviewers for their thoughtful comments and their enthusiasm for our work. These results are consistent with those of Zoltowski et al. [2020] and with the Laplace EM comparison in Section 3. Segmenting the continuous latent states for each population is equivalent to imposing hard constraints. On top of that, the "sticky" parameterization of discrete state transitions reveals which neural populations are involved. C. elegans offers an illustrative demonstration of the mp-srSLDS; for example, we explore interactions between ganglia in Appendix C. Thanks again for spending the time to provide valuable feedback on our work.