
Provably Efficient Q-Learning



Provably Efficient Q-Learning with Low Switching Cost

Neural Information Processing Systems

We take initial steps in studying PAC-MDP algorithms with limited adaptivity, that is, algorithms that change their exploration policy as infrequently as possible during regret minimization. This is motivated by the difficulty of running fully adaptive algorithms in real-world applications (such as medical domains), and we propose to quantify adaptivity using the notion of \emph{local switching cost}. Our main contribution, Q-Learning with UCB2 exploration, is a model-free algorithm for $H$-step episodic MDPs that achieves sublinear regret and whose local switching cost in $K$ episodes is $O(H^3SA\log K)$; we also provide a lower bound of $\Omega(HSA)$ on the local switching cost of any no-regret algorithm. Our algorithm can be naturally adapted to the concurrent setting \citep{guo2015concurrent}, yielding nontrivial results that improve upon prior work in certain aspects.
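For intuition, here is a minimal sketch of the low-switching idea in tabular optimistic Q-learning: the deployed policy is refreshed only when a visit count crosses a doubling trigger, so the number of policy switches grows logarithmically in $K$ per $(h, s, a)$ triple rather than once per episode. This is an illustrative approximation rather than the paper's exact UCB2 schedule; the `env` interface, the bonus constant `c`, and the specific trigger are assumptions.

```python
import numpy as np

def low_switching_q_learning(env, S, A, H, K, c=1.0):
    """Sketch of Q-learning with a low-switching deployment schedule.

    Assumes `env.reset() -> s` and `env.step(s, a) -> (s_next, r)`;
    constants in the bonus are elided for readability.
    """
    Q = np.full((H, S, A), float(H))        # optimistic initialization at H
    Q_deployed = Q.copy()                    # the policy actually used to act
    N = np.zeros((H, S, A), dtype=int)       # visit counts
    next_sync = np.ones((H, S, A), dtype=int)  # doubling triggers
    switches = 0

    for k in range(K):
        s = env.reset()
        for h in range(H):
            a = int(np.argmax(Q_deployed[h, s]))  # act greedily w.r.t. deployed Q
            s_next, r = env.step(s, a)
            N[h, s, a] += 1
            t = N[h, s, a]
            alpha = (H + 1) / (H + t)             # learning rate as in Jin et al. (2018)
            bonus = c * np.sqrt(H**3 * np.log(max(K, 2)) / t)  # UCB-style bonus
            V_next = 0.0 if h == H - 1 else min(H, Q[h + 1, s_next].max())
            Q[h, s, a] = (1 - alpha) * Q[h, s, a] + alpha * (r + V_next + bonus)
            # Refresh the deployed policy only at doubling visit counts,
            # giving O(log K) switches per (h, s, a).
            if t >= next_sync[h, s, a]:
                next_sync[h, s, a] *= 2
                Q_deployed = Q.copy()
                switches += 1
            s = s_next
    return Q, switches
```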


Provably Efficient Q-learning with Function Approximation via Distribution Shift Error Checking Oracle

Neural Information Processing Systems

Q-learning with function approximation is one of the most popular methods in reinforcement learning. Though the idea of using function approximation was proposed at least 60 years ago, even in the simplest setup, i.e., approximating Q-functions with linear functions, it is still an open problem how to design a provably efficient algorithm that learns a near-optimal policy. The key challenges are how to efficiently explore the state space and how to decide when to stop exploring in conjunction with the function approximation scheme. The current paper presents a provably efficient algorithm for Q-learning with linear function approximation. Under certain regularity assumptions, our algorithm, Difference Maximization Q-learning, combined with linear function approximation, returns a near-optimal policy using a polynomial number of trajectories. Our algorithm introduces a new notion, the Distribution Shift Error Checking (DSEC) oracle. This oracle tests whether there exists a function in the function class that predicts well on a distribution $\mathcal{D}_1$, but predicts poorly on another distribution $\mathcal{D}_2$, where $\mathcal{D}_1$ and $\mathcal{D}_2$ are distributions over states induced by two different exploration policies. For the linear function class, this oracle is equivalent to solving a top eigenvalue problem. We believe our algorithmic insights, especially the DSEC oracle, are also useful in designing and analyzing reinforcement learning algorithms with general function approximation.
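To illustrate the reduction mentioned above, here is a hedged numpy sketch of a DSEC-style check for the linear class: it asks whether some linear predictor can look small on average under $\mathcal{D}_1$ yet large under $\mathcal{D}_2$, which amounts to the top generalized eigenvalue of the pair of empirical covariances. The function name, the regularizer `lam`, and the threshold `tau` are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def dsec_oracle(X1, X2, lam=1e-3, tau=10.0):
    """Sketch of a distribution-shift check for linear function classes.

    X1, X2: feature matrices of states sampled under two exploration
    policies (rows are feature vectors). Returns True when there is a
    direction theta whose energy under D2 greatly exceeds its energy
    under D1, i.e. max_theta theta^T S2 theta / theta^T S1 theta > tau.
    """
    d = X1.shape[1]
    Sigma1 = X1.T @ X1 / len(X1) + lam * np.eye(d)  # regularized covariance under D1
    Sigma2 = X2.T @ X2 / len(X2)                     # covariance under D2
    # Whiten by Sigma1 = L L^T; the ratio's maximum is the top eigenvalue
    # of L^{-1} Sigma2 L^{-T}.
    L = np.linalg.cholesky(Sigma1)
    Linv = np.linalg.inv(L)
    M = Linv @ Sigma2 @ Linv.T
    top = np.linalg.eigvalsh(M)[-1]  # eigvalsh returns ascending eigenvalues
    return top > tau  # True: distribution shift detected, keep exploring
```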



Reviews: Provably Efficient Q-Learning with Low Switching Cost

Neural Information Processing Systems

They also present (two flavours of) a Q-learning algorithm that achieves regret matching that of prior work, with the added benefit of a lower local switching cost.


Reviews: Provably Efficient Q-Learning with Low Switching Cost

Neural Information Processing Systems

On balance, the initial reviews for this paper were positive, with one slightly negative review. In discussion, it was felt that the authors did a reasonable job of addressing the reviewers' concerns, though there was still some concern that the result may not be "surprising". I encourage the authors to incorporate their responses to the reviewers into any future version of the paper.



Reviews: Provably Efficient Q-learning with Function Approximation via Distribution Shift Error Checking Oracle

Neural Information Processing Systems

The paper proposes an adaptation of the classical Q-learning algorithm with linear function approximation that enjoys polynomial sample complexity. All reviewers feel the paper contains an interesting contribution to the RL literature that should appear at this conference, and I therefore recommend acceptance.

