Q-learning with Nearest Neighbors
Neural Information Processing Systems
We consider model-free reinforcement learning for infinite-horizon discounted Markov Decision Processes (MDPs) with a continuous state space and unknown transition kernel, when only a single sample path of the system under an arbitrary policy is available. We consider the Nearest Neighbor Q-Learning (NNQL) algorithm, which learns the optimal Q-function using a nearest neighbor regression method. As the main contribution, we provide a tight finite-sample analysis of the convergence rate. In particular, for MDPs with a $d$-dimensional state space and discount factor $\gamma \in (0,1)$, given an arbitrary sample path with ``covering time'' $L$, we establish that the algorithm is guaranteed to output an $\varepsilon$-accurate estimate of the optimal Q-function using $\tilde{O}\big(L/(\varepsilon^3(1-\gamma)^7)\big)$ samples. Indeed, we establish a lower bound arguing that a dependence of $\tilde{\Omega}(1/\varepsilon^{d+2})$ is necessary.
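The abstract only names the algorithm, so as a rough illustration of the underlying idea, the following is a minimal sketch of Q-learning combined with a nearest-neighbor approximation over a fixed set of anchor states, run on a single sample path. The function and variable names, and the use of a single nearest neighbor (a state-aggregation simplification rather than the paper's nearest neighbor regression with bias-corrected updates), are illustrative assumptions and not the paper's NNQL as specified.

```python
import numpy as np

def nearest_neighbor_q_learning(path, anchors, n_actions, gamma=0.9, alpha=0.1):
    """Sketch: Q-learning with nearest-neighbor state aggregation.

    path     : iterable of (state, action, reward, next_state) transitions
               from a single sample path (states are d-dimensional arrays).
    anchors  : (m, d) array of representative states covering the state space.
    All names here are hypothetical, not the paper's notation.
    """
    q = np.zeros((len(anchors), n_actions))  # Q-values stored at anchor states

    def nn(state):
        # Index of the anchor state nearest to `state` (Euclidean distance).
        return int(np.argmin(np.linalg.norm(anchors - state, axis=1)))

    for state, action, reward, next_state in path:
        i, j = nn(state), nn(next_state)
        # Standard Q-learning update, applied at the nearest anchor state.
        target = reward + gamma * q[j].max()
        q[i, action] += alpha * (target - q[i, action])
    return q
```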