Reviews: Controllable Text-to-Image Generation

Neural Information Processing Systems

The paper is well organized and clearly written, and can be followed easily. In particular, instead of generating a new image from the text, the authors pay more attention to image manipulation based on a modified natural language description. For the word-level spatial and channel-wise attention-driven generator: (1) The novelty and effectiveness of the attentional generator may be limited. Specifically, the paper designs a word-level spatial and channel-wise attention-driven generator, which has two attention parts (i.e., spatial attention and channel-wise attention). However, since the spatial attention is based on the method in AttnGAN [7], most of the contribution may lie in the additional channel-wise part.
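Since the review singles out the channel-wise attention as the main addition over AttnGAN's spatial attention, a minimal PyTorch sketch of word-level channel-wise attention may help clarify the distinction. The class name, the projection into the flattened spatial space, the shapes, and the residual connection below are illustrative assumptions, not the authors' exact implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WordChannelAttention(nn.Module):
    """Illustrative word-level channel-wise attention.

    Unlike spatial attention (which weights locations per word),
    this correlates each feature CHANNEL with the words, so channels
    that tend to encode a visual attribute can be reweighted by the
    words describing that attribute.
    """

    def __init__(self, word_dim: int, spatial_size: int):
        super().__init__()
        # Project word embeddings into the flattened spatial space
        # (H * W) so words and channels share a common space.
        # spatial_size must equal H * W of the feature map.
        self.proj = nn.Linear(word_dim, spatial_size)

    def forward(self, feat: torch.Tensor, words: torch.Tensor) -> torch.Tensor:
        # feat:  (B, C, H, W) image features
        # words: (B, L, D)    word embeddings
        B, C, H, W = feat.shape
        v = feat.view(B, C, H * W)                # (B, C, H*W)
        w = self.proj(words)                      # (B, L, H*W)
        # Channel-word correlation: relevance of each word to each channel.
        attn = torch.bmm(v, w.transpose(1, 2))    # (B, C, L)
        attn = F.softmax(attn, dim=-1)            # normalise over words
        # Word-aware channel context, added back as a residual modulation.
        ctx = torch.bmm(attn, w)                  # (B, C, H*W)
        return (v + ctx).view(B, C, H, W)
```

A call such as `WordChannelAttention(word_dim=256, spatial_size=16 * 16)(feat, words)` with `feat` of shape (2, 128, 16, 16) and `words` of shape (2, 18, 256) returns a feature map of the same shape as `feat`, with each channel reweighted toward the words it correlates with.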


Reviews: Controllable Text-to-Image Generation

Neural Information Processing Systems

This paper was reviewed by three expert reviewers and received three Weak Accept recommendations. After the rebuttal, all the reviewers were positive about the paper and agreed that it is generally well written, the considered problem is interesting, and the results are impressive. However, both R1 and R2 commented that it is difficult to judge the significance of the results, due to the lack of sufficient ablation studies and the fact that CUB is saturated. In addition, R1 had concerns regarding the novelty of the paper, and R3 left several detailed comments for the authors to address. The rebuttal partially resolves the reviewers' concerns.


Controllable Text-to-Image Generation

Li, Bowen, Qi, Xiaojuan, Lukasiewicz, Thomas, Torr, Philip

Neural Information Processing Systems

In this paper, we propose a novel controllable text-to-image generative adversarial network (ControlGAN), which can effectively synthesise high-quality images and also control parts of the image generation according to natural language descriptions. To achieve this, we introduce a word-level spatial and channel-wise attention-driven generator that can disentangle different visual attributes and allow the model to focus on generating and manipulating the subregions corresponding to the most relevant words. Also, a word-level discriminator is proposed to provide fine-grained supervisory feedback by correlating words with image regions, facilitating the training of an effective generator that is able to manipulate specific visual attributes without affecting the generation of other content. Furthermore, a perceptual loss is adopted to reduce the randomness involved in the image generation and to encourage the generator to manipulate specific attributes required in the modified text. Extensive experiments on benchmark datasets demonstrate that our method outperforms the existing state of the art and is able to effectively manipulate synthetic images using natural language descriptions.
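The abstract mentions a perceptual loss used to reduce randomness in the generation. A minimal sketch of one common realisation, assuming a fixed ImageNet-pretrained VGG-16 feature extractor (the abstract does not specify the network or layer, so both are illustrative assumptions), is:

```python
import torch
import torch.nn as nn
from torchvision import models

class PerceptualLoss(nn.Module):
    """Sketch of a VGG-based perceptual loss; the layer cut is illustrative."""

    def __init__(self, cut: int = 16):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        # Freeze a prefix of the VGG feature stack as a fixed extractor.
        self.extract = nn.Sequential(*list(vgg.features[:cut])).eval()
        for p in self.extract.parameters():
            p.requires_grad_(False)

    def forward(self, fake: torch.Tensor, real: torch.Tensor) -> torch.Tensor:
        # fake, real: (B, 3, H, W) images, already normalised for VGG input.
        return nn.functional.mse_loss(self.extract(fake), self.extract(real))
```

Matching deep features rather than raw pixels penalises semantic drift while leaving low-level detail free, which is consistent with the stated goal of changing only the attributes named in the modified text.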

