Near-Optimal Time and Sample Complexities for Solving Markov Decision Processes with a Generative Model
Aaron Sidford, Mengdi Wang, Xian Wu, Lin Yang, Yinyu Ye
Neural Information Processing Systems
In this paper we consider the problem of computing an $\epsilon$-optimal policy of a discounted Markov Decision Process (DMDP), provided we can only access its transition function through a generative sampling model that, given any state-action pair, samples from the transition function in $O(1)$ time. We also extend our method to computing $\epsilon$-optimal policies for finite-horizon MDPs with a generative model, and provide a nearly matching sample complexity lower bound.
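The abstract's access model can be made concrete with a small sketch. The code below is not the authors' algorithm; it illustrates, on a hypothetical toy DMDP (the states, actions, and transition table `P` are invented for illustration), what "generative model access" means: transitions can only be queried by drawing one sample per (state, action) call, each in $O(1)$ expected time.

```python
import random

# Hypothetical toy DMDP: 3 states, 2 actions; P[s][a] is the list of
# next-state probabilities for taking action a in state s.
P = {
    0: {0: [0.9, 0.1, 0.0], 1: [0.1, 0.8, 0.1]},
    1: {0: [0.0, 0.5, 0.5], 1: [0.3, 0.3, 0.4]},
    2: {0: [0.2, 0.0, 0.8], 1: [0.0, 0.1, 0.9]},
}

def generative_model(s, a, rng=random):
    """Sample a next state for (s, a) -- the only access to P an
    algorithm in this setting is allowed."""
    u, acc = rng.random(), 0.0
    for s_next, p in enumerate(P[s][a]):
        acc += p
        if u < acc:
            return s_next
    return len(P[s][a]) - 1  # guard against floating-point round-off

def estimate_transition(s, a, n=10000, rng=random):
    """Monte Carlo estimate of P[s][a] built purely from generative
    samples, as a sample-based planner would do internally."""
    counts = [0] * len(P[s][a])
    for _ in range(n):
        counts[generative_model(s, a, rng)] += 1
    return [c / n for c in counts]
```

With enough samples, `estimate_transition(0, 0)` concentrates around `[0.9, 0.1, 0.0]`; the paper's contribution is bounding how few such samples suffice to recover an $\epsilon$-optimal policy.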