Provably Efficient Q-learning with Function Approximation via Distribution Shift Error Checking Oracle
Simon S. Du, Yuping Luo, Ruosong Wang, Hanrui Zhang
–Neural Information Processing Systems
Q-learning with function approximation is one of the most popular methods in reinforcement learning. Though the idea of using function approximation was proposed at least 60 years ago [27], even in the simplest setup, i.e., approximating Q-functions with linear functions, it remains an open problem how to design a provably efficient algorithm that learns a near-optimal policy. The key challenges are how to efficiently explore the state space and how to decide when to stop exploring, in conjunction with the function approximation scheme. The current paper presents a provably efficient algorithm for Q-learning with linear function approximation. Under certain regularity assumptions, our algorithm, Difference Maximization Q-learning (DMQ), combined with linear function approximation, returns a near-optimal policy using a polynomial number of trajectories. Our algorithm introduces a new notion, the Distribution Shift Error Checking (DSEC) oracle.
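The abstract only names the DSEC oracle without spelling out its mechanics. Below is a minimal, hypothetical sketch of what such a check could look like in the linear-function case: given features collected under an old distribution and a new one, it asks whether some linear function can fit the old data well yet incur large error on the new data, which would flag a harmful distribution shift. The reduction to a generalized eigenvalue problem, the regularization parameter `lam`, and the threshold `tau` are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from scipy.linalg import eigh


def dsec_oracle(phi_old: np.ndarray, phi_new: np.ndarray,
                lam: float = 1.0, tau: float = 10.0) -> bool:
    """Hypothetical linear-case distribution shift check.

    phi_old: (n_old, d) feature matrix gathered under the old distribution.
    phi_new: (n_new, d) feature matrix gathered under the new distribution.
    Returns True if some direction w is poorly covered by the old data but
    heavily weighted by the new data, i.e. the worst-case ratio
        max_w  w^T Lambda_new w / w^T (Lambda_old + lam I) w
    exceeds the threshold tau.
    """
    d = phi_old.shape[1]
    cov_old = phi_old.T @ phi_old + lam * np.eye(d)  # regularized old covariance
    cov_new = phi_new.T @ phi_new                    # new covariance

    # Largest generalized eigenvalue of (cov_new, cov_old) gives the worst-case ratio.
    eigvals = eigh(cov_new, cov_old, eigvals_only=True)
    return float(eigvals[-1]) > tau


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 5
    phi_old = rng.normal(size=(200, d))
    # New distribution concentrates on a direction barely covered by the old data.
    phi_new = np.outer(rng.normal(size=300), np.eye(d)[0]) * 10.0
    print("shift detected:", dsec_oracle(phi_old, phi_new))
```

When the oracle returns True, an algorithm in the spirit of DMQ would treat the flagged state distribution as insufficiently explored and collect more trajectories there before trusting the fitted Q-function; when it returns False, the old data already covers the new distribution and learning can proceed.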