DeepI2I: Enabling Deep Hierarchical Image-to-Image Translation by Transferring from GANs

Neural Information Processing Systems

Image-to-image translation has recently achieved remarkable results. Despite this success, it suffers from inferior performance when translation between classes requires large shape changes. We attribute this to the high-resolution bottlenecks used by current state-of-the-art image-to-image methods. Therefore, in this work, we propose a novel deep hierarchical image-to-image translation method, called DeepI2I. We learn a model by leveraging hierarchical features: (a) structural information contained in the bottom layers and (b) semantic information extracted from the top layers. To enable the training of deep I2I models on small datasets, we propose a novel transfer learning method that transfers knowledge from pre-trained GANs.
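The hierarchical-feature idea above can be illustrated with a minimal sketch: a toy encoder pyramid where early (bottom) levels keep spatial structure and later (top) levels become coarse, semantic-like summaries. This is an illustrative stand-in using average pooling, not DeepI2I's actual encoder; the function name and pooling scheme are assumptions.

```python
import numpy as np

def encoder_pyramid(image, num_levels=4):
    """Toy stand-in for a hierarchical encoder: repeated 2x average
    pooling. Bottom levels retain high-resolution structural detail;
    top levels are low-resolution, semantic-like summaries."""
    feats = [image]
    x = image
    for _ in range(num_levels - 1):
        h, w = x.shape
        # 2x2 average pooling halves each spatial dimension.
        x = x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        feats.append(x)
    return feats

feats = encoder_pyramid(np.random.rand(32, 32))
print([f.shape for f in feats])  # → [(32, 32), (16, 16), (8, 8), (4, 4)]
```

In DeepI2I all of these levels, not just the final one, would be passed (through an adaptor) to the generator, so that both shape and semantics survive the translation.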



Supplementary Material DeepI2I: Enabling Deep Hierarchical Image-to-Image Translation by Transferring from GANs

Neural Information Processing Systems

We transfer from BigGAN to learn more detailed network information. The learning rate of the generator is 0.0001, and that of the encoder, adaptor, and discriminator is 0.0004, with exponential decay. We also evaluate our method using fewer animal faces. Interpolation is performed by keeping the input image fixed while interpolating between two class embeddings: the first column shows the input images, and the remaining columns show the interpolated results. Further results on the Animal Faces dataset are included.
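The class-embedding interpolation described above can be sketched as follows. This is a minimal sketch under stated assumptions: the function name, embedding dimensionality, and linear blending scheme are illustrative, not the paper's exact implementation.

```python
import numpy as np

def interpolate_class_embeddings(e_src, e_tgt, steps=8):
    """Linearly blend two class embeddings. With the input image held
    fixed, each blended embedding conditions the generator to produce
    one column of the interpolation figure. (Linear blending is an
    assumed scheme for illustration.)"""
    alphas = np.linspace(0.0, 1.0, steps)
    return [(1.0 - a) * e_src + a * e_tgt for a in alphas]

# Hypothetical 128-d class embeddings for two target classes.
cat_emb, dog_emb = np.zeros(128), np.ones(128)
path = interpolate_class_embeddings(cat_emb, dog_emb, steps=5)
print(len(path))  # → 5 conditioning vectors, from cat_emb to dog_emb
```

Each vector in `path` would be fed to the generator alongside the fixed encoder features, yielding the gradual class-to-class transition shown in the figure.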


Figure 19: (left) Comparison with StarGAN v2, DRIT++, and ablation of reconstruction loss, (middle)

Neural Information Processing Systems

Note: En. = encoder, Gen. = generator, Dis. = discriminator. We will improve the related work with the mentioned papers. Our method builds on a deep architecture (BigGAN) which has not been applied to I2I before. We outperform them on all 4 metrics. BigGAN-like architectures have not been explored for I2I (contr. However, currently no evaluation metrics exist.




