Review for NeurIPS paper: On Reward-Free Reinforcement Learning with Linear Function Approximation

I would just like to confirm my understanding of the algorithmic contributions of this work. As far as I understand, Jin et al. [2019] propose a learning algorithm for the standard RL setting with linear function approximation in linear MDPs. Jin et al. [2020] then propose a method for efficient exploration in the reward-free RL setting, but for tabular MDPs rather than the linear setting. In that work, exploration is achieved by constructing a reward function that assigns reward 1 to states deemed "significant" and 0 otherwise, and then solving the resulting task with an efficient learning algorithm.
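To make sure I have the construction right, here is a minimal tabular sketch of my reading of it. The significant-state set, the toy transition model, and the value-iteration planner are all my own placeholders for illustration; Jin et al. [2020] use their own notion of significance and their own learning algorithm.

```python
import numpy as np

def exploration_reward(n_states, significant):
    """Indicator reward: 1 on 'significant' states, 0 elsewhere."""
    r = np.zeros(n_states)
    r[list(significant)] = 1.0
    return r

def solve_tabular(P, r, gamma=0.9, n_iters=200):
    """Plan against the constructed reward with value iteration.
    P has shape (A, S, S); r has shape (S,) and depends only on the state."""
    v = np.zeros(P.shape[1])
    for _ in range(n_iters):
        q = r[None, :] + gamma * P @ v  # Q-values, shape (A, S)
        v = q.max(axis=0)
    return v, q.argmax(axis=0)

# Toy 3-state chain where state 2 is deemed significant.
P = np.zeros((2, 3, 3))
P[0] = np.eye(3)                       # action 0: stay put
P[1] = np.roll(np.eye(3), 1, axis=1)   # action 1: move right (wraps)
r = exploration_reward(3, {2})
v, policy = solve_tabular(P, r)        # policy steers toward state 2
```

The point of the sketch is only the two-step structure: first build the indicator reward, then hand the induced task to any efficient planner or learner.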