Worst-Case Regret Bounds for Exploration via Randomized Value Functions
This paper studies a recent proposal to use randomized value functions to drive exploration in reinforcement learning. These randomized value functions are generated by injecting random noise into the training data, making the approach compatible with many popular methods for estimating parameterized value functions. By providing a worst-case regret bound for tabular finite-horizon Markov decision processes, we show that planning with respect to these randomized value functions can induce provably efficient exploration.
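To make the abstract's idea concrete, here is a minimal sketch of the randomized-value-function approach (RLSVI-style) for a tabular finite-horizon MDP: a randomized Q-function is produced by running backward induction on the empirical model with Gaussian noise injected into the reward targets, and the agent then plans greedily with respect to that sample. All names, array layouts, and the specific noise scale below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def randomized_q_values(counts, reward_sums, trans_counts, H, noise_std=1.0, rng=None):
    """One randomized Q-function via noisy backward induction (sketch).

    counts[h, s, a]            -- visit counts per step/state/action
    reward_sums[h, s, a]       -- summed observed rewards
    trans_counts[h, s, a, s']  -- observed next-state counts
    """
    rng = rng or np.random.default_rng()
    _, S, A = counts.shape
    Q = np.zeros((H, S, A))
    V = np.zeros(S)  # terminal value is zero
    for h in reversed(range(H)):
        n = np.maximum(counts[h], 1)
        # empirical mean reward plus injected Gaussian noise -- this is the
        # "random noise in the training data" that drives exploration
        r_hat = reward_sums[h] / n + rng.normal(0.0, noise_std / np.sqrt(n))
        p_hat = trans_counts[h] / n[..., None]   # empirical transition model
        Q[h] = r_hat + p_hat @ V                 # noisy Bellman backup
        V = Q[h].max(axis=1)
    return Q
```

A fresh Q-function would be sampled at the start of each episode, so the noise resolves in favor of under-visited state-action pairs often enough to induce exploration.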
Reviews: Worst-Case Regret Bounds for Exploration via Randomized Value Functions
Post author response: I thank the author(s) for their response and for commenting on my discussion points. As those points would require additional work, I keep my original score for now: this is a solid paper. While the proofs of Lemmas 4 and 5 are described very well in the main text, it would be helpful to have a short explanation of how they are used to obtain Lemma 6. If necessary, I suggest dropping the proof of Lemma 3 from the main text, as this result is standard. Quality: I have verified the proof in the main text and the individual lemmas in the appendix.
The paper gives a frequentist regret bound for the RLSVI algorithm. While the bound is not minimax optimal (and can potentially be improved), this is the first frequentist guarantee for this algorithm, and the proof contains new technical insights that may be useful in future work. Further, the result demonstrates that algorithmic strategies other than, say, optimism may yield provably sample-efficient RL methods. Thanks for notifying us about the bug you found in the proof! I discussed it with the reviewers, and we all agreed it was not a deal-breaker.