Swapping Autoencoder for Deep Image Manipulation
Deep generative models have become increasingly effective at producing realistic images from randomly sampled seeds, but using such models for controllable manipulation of existing images remains challenging. We propose the Swapping Autoencoder, a deep model designed specifically for image manipulation, rather than random sampling. The key idea is to encode an image into two independent components and enforce that any swapped combination maps to a realistic image. In particular, we encourage the components to represent structure and texture by enforcing one component to encode co-occurrent patch statistics across different parts of the image. As our method is trained with an encoder, finding the latent codes for a new input image becomes trivial, rather than cumbersome. As a result, our method enables us to manipulate real input images in various ways, including texture swapping, local and global editing, and latent code vector arithmetic. Experiments on multiple datasets show that our model produces better results and is substantially more efficient compared to recent generative models.
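To make the swap operation concrete, here is a minimal, purely illustrative sketch in NumPy. It uses channel-mean statistics as a stand-in for the learned texture code and mean-removed features as a stand-in for the structure code; the actual model uses learned convolutional encoders, a GAN-trained decoder, and a patch co-occurrence discriminator, none of which are shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(image):
    """Toy encoder: split an image into a spatial 'structure' code and a
    pooled 'texture' code. The split used here (channel means vs. the
    residual) is an illustrative assumption, not the paper's architecture."""
    texture = image.mean(axis=(0, 1))   # global per-channel statistics
    structure = image - texture         # spatial layout, zero-mean per channel
    return structure, texture

def decode(structure, texture):
    """Toy decoder: recombine the two codes into an image."""
    return structure + texture

# Two input images of shape (H, W, C)
img_a = rng.random((4, 4, 3))
img_b = rng.random((4, 4, 3))

s_a, t_a = encode(img_a)
s_b, t_b = encode(img_b)

# Reconstruction: decoding an image's own codes recovers the image.
recon_a = decode(s_a, t_a)

# Swap: structure of A combined with texture of B -- the hybrid that
# training forces the decoder to render as a realistic image.
hybrid = decode(s_a, t_b)
```

In this toy setting the hybrid exactly inherits A's spatial residual and B's channel statistics; in the real model the analogous property is enforced by adversarial training rather than by construction.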
Review for NeurIPS paper: Swapping Autoencoder for Deep Image Manipulation
The main idea of this paper is disentangling structure and texture using an autoencoder-like architecture. However, this is not a new idea and has indeed been studied in many previous works. Although the authors try to differentiate their method from existing ones along the supervised/unsupervised axis in the related work section, it is still not technically impressive to me. Moreover, there is no comparison to these disentanglement methods. Perhaps those methods cannot be directly applied to the tasks considered in this paper, but I do not think adapting them would be difficult.
Review for NeurIPS paper: Swapping Autoencoder for Deep Image Manipulation
The reviewers' opinions on this paper diverge even after considering the rebuttal and discussing; the meta-review is thus unusually detailed. The paper proposes an approach to image editing that disentangles structure and texture using an autoencoder whose latent space is decomposed into two parts, corresponding to texture and structure. Regarding the additional results: this is not acceptable, and some papers have been desk-rejected for doing the same thing. These additional results have to be moved elsewhere.