Review for NeurIPS paper: Greedy inference with structure-exploiting lazy maps

Neural Information Processing Systems 

Additional Feedback:

### POST AUTHOR FEEDBACK ### I am raising my score, as the authors have done a good job of addressing my feedback and the other reviews were favourable.

I like the idea of intelligently reducing a higher-dimensional problem to a series of lower-dimensional problems, the adaptive error bounds on the approximation, and the map-learning procedure, which involves more than just defining a loss function and blindly optimizing. However, I also have some comments / questions which, if addressed, would very much solidify this paper's contribution to the field of machine learning, in my opinion.

As mentioned a few times already, I would like some clarity on Proposition 3. Specifically: (### POST FEEDBACK NOTE - I misunderstood on first read; thank you for clarifying in your response.) I guess this could be considered a good thing for weak convergence, but then why even include this condition?