Meta-review for NeurIPS paper: Leap-Of-Thought: Teaching Pre-Trained Models to Systematically Reason Over Implicit Knowledge


All four reviewers support acceptance of this contribution. I believe the work is original and intriguing enough to merit a spotlight. This summary from R4 captures how the paper opens new possibilities in NLP, complementing powerful adaptable models such as GPT-3: "This paper shows that it is possible to adapt pretrained language models (LMs) on the fly based on natural language text in order to correct the model's behavior. When an LM would answer a question incorrectly, the authors supplement the model with a hint or relevant piece of evidence in the form of natural language text and find that the model is then able to produce the correct answer. These results are a proof of concept that large, black-box LMs can be adapted and corrected in a natural way, potentially by non-expert users of the system, simply by providing relevant natural language text."