Achieving Constant Regret in Linear Markov Decision Processes
Neural Information Processing Systems
We study constant regret guarantees in reinforcement learning (RL). Our objective is to design an algorithm that incurs only finite regret over infinite episodes with high probability. We introduce an algorithm, Cert-LSVI-UCB, for misspecified linear Markov decision processes (MDPs), where both the transition kernel and the reward function can be approximated by some linear function up to misspecification level ζ. At the core of Cert-LSVI-UCB is an innovative certified estimator, which facilitates a fine-grained concentration analysis for multi-phase value-targeted regression, enabling us to establish an instance-dependent regret bound that is constant with respect to the number of episodes.
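To make the abstract's setting concrete, the following is a minimal sketch of the standard least-squares value iteration step with a UCB bonus, the building block on which LSVI-UCB-style algorithms (and, by extension, Cert-LSVI-UCB) are based. It does not reproduce the paper's certified estimator or multi-phase analysis; the feature dimension, synthetic data, and the confidence radius `beta` are placeholder assumptions for illustration only.

```python
import numpy as np

def lsvi_ucb_step(phis, targets, beta, lam=1.0):
    """One LSVI-UCB-style update: ridge regression on value targets,
    plus an elliptical-confidence exploration bonus.

    phis    : (n, d) array of feature vectors phi(s, a) observed so far
    targets : (n,) array of regression targets, e.g. r + max_a' Q(s', a')
    beta    : confidence radius (hypothetical constant, not from the paper)
    lam     : ridge regularization parameter
    """
    d = phis.shape[1]
    # Regularized Gram matrix Lambda = lam * I + sum_i phi_i phi_i^T
    Lambda = lam * np.eye(d) + phis.T @ phis
    # Least-squares weight vector w = Lambda^{-1} sum_i phi_i * y_i
    w = np.linalg.solve(Lambda, phis.T @ targets)
    Lambda_inv = np.linalg.inv(Lambda)

    def q_ucb(phi):
        # Optimistic Q estimate: linear prediction + bonus beta * ||phi||_{Lambda^{-1}}
        bonus = beta * np.sqrt(phi @ Lambda_inv @ phi)
        return phi @ w + bonus

    return w, q_ucb

# Synthetic usage: recover a known linear value function from noisy targets.
rng = np.random.default_rng(0)
phis = rng.normal(size=(50, 4))
w_true = np.array([1.0, -0.5, 0.3, 0.0])
targets = phis @ w_true + 0.01 * rng.normal(size=50)
w, q_ucb = lsvi_ucb_step(phis, targets, beta=1.0)
```

In a misspecified linear MDP, the targets deviate from any linear model by up to ζ per step, which is why a plain confidence radius no longer suffices and the paper introduces a certified estimator for the regression phase.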