Efficient Model-free Reinforcement Learning in Metric Spaces
Model-free Reinforcement Learning (RL) algorithms such as Q-learning [Watkins, Dayan 92] have been widely used in practice and can achieve human-level performance in applications such as video games [Mnih et al. 15]. Recently, equipped with the idea of optimism in the face of uncertainty, Q-learning algorithms [Jin, Allen-Zhu, Bubeck, Jordan 18] have been proven sample efficient for discrete tabular Markov Decision Processes (MDPs), which have a finite number of states and actions. In this work, we present an efficient model-free Q-learning based algorithm for MDPs with a natural metric on the state-action space, thereby extending efficient model-free Q-learning algorithms to continuous state-action spaces. Compared to previous model-based RL algorithms for metric spaces [Kakade, Kearns, Langford 03], our algorithm does not require access to a black-box planning oracle.
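To make the idea concrete, the following is a minimal sketch (not the paper's algorithm) of optimistic Q-learning run over a fixed epsilon-net discretization of a continuous state-action space [0, 1] x [0, 1]. The toy dynamics, the net resolution, the bonus constant, and the learning-rate schedule of the form (H+1)/(H+n) from [Jin, Allen-Zhu, Bubeck, Jordan 18] are all illustrative assumptions.

```python
import math
import random

# Hedged sketch: optimistic Q-learning on an epsilon-net of [0,1] x [0,1].
# All constants and the toy MDP below are illustrative assumptions.
EPS = 0.1        # net resolution: points snap to the nearest multiple of EPS
H = 5            # episode horizon
EPISODES = 2000
C_BONUS = 0.1    # scaling constant for the exploration bonus (assumption)

def snap(x):
    """Map a continuous point in [0,1] to its nearest epsilon-net center."""
    x = min(max(x, 0.0), 1.0)
    return round(round(x / EPS) * EPS, 10)

def step(s, a):
    """Toy dynamics: reward peaks when the action matches the state."""
    r = 1.0 - abs(s - a)
    s_next = min(max(s + random.uniform(-0.1, 0.1), 0.0), 1.0)
    return r, s_next

actions = [round(i * EPS, 10) for i in range(int(1 / EPS) + 1)]
Q = {}  # (step h, net state, net action) -> optimistic value estimate
N = {}  # visit counts for the same keys

def q(h, s, a):
    return Q.get((h, s, a), float(H))  # optimistic initialization at H

for _ in range(EPISODES):
    s = snap(random.random())
    for h in range(H):
        a = max(actions, key=lambda a_: q(h, s, a_))  # greedy w.r.t. optimistic Q
        r, s_cont = step(s, a)
        s_next = snap(s_cont)
        key = (h, s, a)
        N[key] = N.get(key, 0) + 1
        lr = (H + 1) / (H + N[key])                    # step-size schedule
        bonus = C_BONUS * math.sqrt(H**3 * math.log(EPISODES) / N[key])
        v_next = 0.0 if h == H - 1 else max(q(h + 1, s_next, a_) for a_ in actions)
        # Optimistic update, clipped at H so estimates stay bounded
        Q[key] = min(float(H), (1 - lr) * q(h, s, a) + lr * (r + v_next + bonus))
        s = s_next
```

The fixed net here stands in for whatever discretization the metric structure supports; the point of the sketch is only the interplay between count-based bonuses and Q-updates on net centers.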
May-1-2019