A General Framework for Sample-Efficient Function Approximation in Reinforcement Learning
Zixiang Chen, Chris Junchi Li, Angela Yuan, Quanquan Gu, Michael I. Jordan
Reinforcement learning (RL) is a decision-making process that seeks to maximize the expected reward when an agent interacts with the environment [Sutton and Barto, 2018]. Over the past decade, RL has gained increasing attention due to its successes in a wide range of domains, including Atari games [Mnih et al., 2013], the game of Go [Silver et al., 2016], autonomous driving [Yurtsever et al., 2020], and robotics [Kober et al., 2013]. Existing RL algorithms can be categorized into value-based algorithms such as Q-learning [Watkins, 1989] and policy-based algorithms such as policy gradient [Sutton et al., 1999]. They can also be categorized as model-free approaches, in which one directly models the value function class, or model-based approaches, in which one estimates the transition probability. Because the state and action spaces used to model complex real-world environments are intractably large, function approximation in RL has become prominent in both algorithm design and theoretical analysis, and designing sample-efficient RL algorithms with general function approximation is a pressing challenge. In the special case where the underlying Markov Decision Processes (MDPs) enjoy certain linear structures, several lines of work have achieved polynomial sample complexity and/or $\sqrt{T}$ regret guarantees in either model-free or model-based settings. For linear MDPs, where the transition probability and the reward function admit linear structure, Yang and Wang [2019] developed a variant of Q-learning when granted access to a generative model, and Jin et al. [2020] proposed the LSVI-UCB algorithm with an $\widetilde{O}(\sqrt{d^3 H^3 T})$ regret bound.
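For reference, the "linear structure" invoked above has a standard formulation; the notation here ($\phi$, $\mu_h$, $\theta_h$, $d$) follows the usual linear MDP definition and is supplied for illustration rather than quoted from this abstract. At each step $h$, the transition kernel and reward are assumed to satisfy

\[
P_h(s' \mid s, a) = \langle \phi(s, a), \mu_h(s') \rangle,
\qquad
r_h(s, a) = \langle \phi(s, a), \theta_h \rangle,
\]

where $\phi : \mathcal{S} \times \mathcal{A} \to \mathbb{R}^d$ is a known feature map while the measures $\mu_h$ and vectors $\theta_h$ are unknown. Under this assumption, guarantees such as the regret bound above scale polynomially in the feature dimension $d$ rather than in the cardinality of the state and action spaces.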
arXiv.org Artificial Intelligence
Sep-30-2022