Reviews: Triangle Generative Adversarial Networks

Neural Information Processing Systems

Most importantly, I agree that the characterization of Triple GAN is somewhat misleading. The current paper should clarify that Triangle GAN fits a model to p_y(y|x) rather than this density being required as given. The toy experiment should note that p_y(y|x) in Triple GAN could be modeled as a mixture of Gaussians, although it is preferable that Triangle GAN does not require specifying this. The objective essentially combines a conditional GAN with BiGAN/ALI. That is an intuitive and perhaps simple thing to try for the semi-supervised setting, but it is nice that this paper backs up the formulation with theory about behavior at optimality.
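For context, the objective being discussed couples two conditional generators, $p_x(x|y)$ and $p_y(y|x)$, through two discriminators. The value function below is a paraphrase from memory of the Triangle GAN formulation, written schematically rather than quoted from the paper:

\[
\min_{G_x, G_y} \max_{D_1, D_2} \;
\mathbb{E}_{(x,y) \sim p(x,y)} \big[\log D_1(x,y)\big]
+ \mathbb{E}_{(\tilde{x},y) \sim p_x(x,y)} \big[\log \big((1 - D_1(\tilde{x},y))\, D_2(\tilde{x},y)\big)\big]
+ \mathbb{E}_{(x,\tilde{y}) \sim p_y(x,y)} \big[\log \big((1 - D_1(x,\tilde{y}))\, (1 - D_2(x,\tilde{y}))\big)\big]
\]

where $p_x(x,y) = p(y)\, p_x(x|y)$ and $p_y(x,y) = p(x)\, p_y(y|x)$. $D_1$ plays the usual real-versus-fake role, while $D_2$ distinguishes the two generated joints; at optimality all three joint distributions coincide, which is the behavior the review refers to.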



NAM: Non-Adversarial Unsupervised Domain Mapping

Hoshen, Yedid, Wolf, Lior

arXiv.org Machine Learning

Several methods were recently proposed for the task of translating images between domains without prior knowledge in the form of correspondences. The existing methods apply adversarial learning, which suffers from known stability issues, to ensure that the distribution of the mapped source domain is indistinguishable from the target domain. In addition, most methods rely heavily on "cycle" relationships between the domains, which enforce a one-to-one mapping. In this work, we introduce an alternative method: Non-Adversarial Mapping (NAM), which separates the task of target-domain generative modeling from the cross-domain mapping task. NAM relies on a pre-trained generative model of the target domain, and aligns each source image with an image synthesized from the target domain while jointly optimizing the domain mapping function. It has several key advantages: higher-quality and higher-resolution image translations, simpler and more stable training, and reusable target models. Extensive experiments are presented validating the advantages of our method.
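To make the alignment idea concrete, here is a minimal PyTorch-style sketch of one way such a procedure could look. All names (nam_translate, G, T) are illustrative rather than the authors' code: G stands for the frozen pre-trained target-domain generator, T for the trainable mapping network, and a plain L1 distance stands in for whatever image distance the method actually uses.

import torch

def nam_translate(G, T, x_src, z_dim, steps=1000, lr=0.05):
    # One latent code per source image; the codes and the mapping network T
    # are optimized jointly, while the pre-trained generator G stays frozen
    # (its parameters are simply not handed to the optimizer).
    z = torch.randn(x_src.shape[0], z_dim, requires_grad=True)
    opt = torch.optim.Adam([z] + list(T.parameters()), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        y_hat = G(z)                            # synthesized target-domain images
        loss = (T(y_hat) - x_src).abs().mean()  # align T(G(z)) with the sources
        loss.backward()
        opt.step()
    return G(z).detach()  # the aligned syntheses serve as the translations

Note that nothing here is adversarial: the only moving parts are the latent codes and T, which is what the abstract means by separating target-domain modeling from the mapping task.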


One-Sided Unsupervised Domain Mapping

Benaim, Sagie, Wolf, Lior

Neural Information Processing Systems

In unsupervised domain mapping, the learner is given two unmatched datasets $A$ and $B$. The goal is to learn a mapping $G_{AB}$ that translates a sample in $A$ to the analogous sample in $B$. Recent approaches have shown that when both $G_{AB}$ and the inverse mapping $G_{BA}$ are learned simultaneously, convincing mappings are obtained. In this work, we present a method of learning $G_{AB}$ without learning $G_{BA}$. This is done by learning a mapping that maintains the distance between a pair of samples. Moreover, good mappings are obtained even by maintaining the distance between different parts of the same sample before and after mapping. We present experimental results showing that the new method not only allows for one-sided mapping learning, but also leads to better numerical results than the existing circularity-based constraint. Our entire code is made publicly available at~\url{https://github.com/sagiebenaim/DistanceGAN}.
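The distance-preservation constraint is simple enough to state in a few lines. The sketch below is an illustrative PyTorch rendering, not the authors' code; as I read the paper, pairwise L1 distances are normalized by precomputed per-domain statistics (mu_A, sigma_A, mu_B, sigma_B here) before being compared.

import torch

def distance_loss(G_AB, x_i, x_j, mu_A, sigma_A, mu_B, sigma_B):
    # Pairwise L1 distance between two samples in the source domain A...
    d_A = (x_i - x_j).abs().mean()
    # ...and between their translations in the target domain B.
    d_B = (G_AB(x_i) - G_AB(x_j)).abs().mean()
    # Penalize the discrepancy after per-domain normalization, so that
    # samples close together in A map to samples close together in B.
    return torch.abs((d_A - mu_A) / sigma_A - (d_B - mu_B) / sigma_B)

This term is added to the usual adversarial loss on $G_{AB}$; because it constrains $G_{AB}$ directly, no inverse mapping $G_{BA}$ is needed.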


ALICE: Towards Understanding Adversarial Learning for Joint Distribution Matching

Li, Chunyuan, Liu, Hao, Chen, Changyou, Pu, Yunchen, Chen, Liqun, Henao, Ricardo, Carin, Lawrence

arXiv.org Machine Learning

We investigate the non-identifiability issues associated with bidirectional adversarial training for joint distribution matching. Within a framework of conditional entropy, we propose both adversarial and non-adversarial approaches to learn desirable matched joint distributions for unsupervised and supervised tasks. We unify a broad family of adversarial models as joint distribution matching problems. Our approach stabilizes the training of unsupervised bidirectional adversarial learning methods. Further, we introduce an extension for semi-supervised learning tasks. Theoretical results are validated on synthetic data and in real-world applications.
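To give the conditional-entropy framing a concrete shape: ALI/BiGAN-style training matches the two joints $q(x,z)$ and $p(x,z)$ but leaves the conditionals non-identified, and the remedy described here is a regularizer that bounds the conditional entropy. A schematic (non-adversarial) form, with an illustrative $\ell_2$ reconstruction standing in for the paper's choice of surrogate and $\lambda$ a weighting introduced here for exposition, is:

\[
\min_{\theta,\phi} \; \mathcal{L}_{\mathrm{ALI}}(\theta,\phi)
+ \lambda \, \mathbb{E}_{x \sim q(x),\, z \sim q_\phi(z|x)} \big\lVert x - \tilde{x} \big\rVert_2^2,
\qquad \tilde{x} \sim p_\theta(x|z),
\]

i.e., cycle-consistent reconstruction acts as a tractable surrogate for the conditional entropy $H(x|z)$; per the abstract, an adversarially learned variant of the same term is developed as well.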