Reviews: Learning to Predict Without Looking Ahead: World Models Without Forward Prediction

Neural Information Processing Systems 

Interesting work that explores whether a world model can be learned without a forward-predictive loss, providing a novel perspective on model-based reinforcement learning. By introducing a method called 'observational dropout', the paper takes a first step towards demonstrating that an agent can learn only the salient features needed for a task. The rebuttal adds baseline comparisons to model-based RL, which will be a valuable addition to the paper.
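The core idea of observational dropout is that, at each timestep, the policy only occasionally "peeks" at the real observation; the rest of the time it is fed the world model's own prediction, which pressures the model to capture task-relevant dynamics without any explicit forward-prediction loss. A minimal sketch of such a rollout, with a hypothetical toy environment, policy, and model standing in for the paper's learned components:

```python
import random

def observational_dropout_rollout(env, policy, world_model,
                                  peek_prob=0.1, steps=100):
    """Roll out one episode in which the policy sees the real observation
    only with probability `peek_prob`; otherwise it sees the world model's
    prediction. All three callables are illustrative stand-ins, not the
    paper's actual implementation."""
    obs = env.reset()          # at t=0 the policy sees the real observation
    peeks = 0
    for _ in range(steps):
        action = policy(obs)
        real_obs, done = env.step(action)
        if random.random() < peek_prob:
            obs = real_obs     # peek: ground the rollout in the environment
            peeks += 1
        else:
            obs = world_model(obs, action)  # otherwise stay inside the model
        if done:
            break
    return peeks


class ToyChain:
    """Minimal 1-D environment: the state is an integer position."""
    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        self.pos += action
        return self.pos, abs(self.pos) >= 10  # done when far from origin
```

In training, the world model's parameters would be updated only through the policy's task reward, so gradients never enforce pixel-perfect prediction, only usefulness for control.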