Review for NeurIPS paper: Unsupervised Text Generation by Learning from Search


Weaknesses: (1) The paper presents the method as a "generic unsupervised text generation framework," but evaluates it only on relatively easy tasks, namely paraphrase generation and formality style transfer, where many words are shared between the input and the output; other standard and representative sequence generation tasks such as machine translation and summarization are not considered. The related work section likewise discusses only the literature on these two tasks. The paper (especially the introduction) therefore feels over-claimed: the authors should either include other standard tasks to support the current claim, or soften the claim and discuss the method's limitations in a separate (sub)section. For example, I would expect simulated annealing (SA) to have much more difficulty exploring the output space for machine translation, summarization, or dialogue, which suggests the proposed method may not be "a generic framework." Paraphrasing and formality transfer appear to be special cases where SA works well, so I see many limitations of the proposed method unless the authors provide sufficient experimental evidence to the contrary.