Near-Optimal Goal-Oriented Reinforcement Learning in Non-Stationary Environments
–Neural Information Processing Systems
The different roles of c and P in this lower bound inspire us to design algorithms that estimate costs and transitions separately. Specifically, assuming knowledge of c and P, we develop a simple but sub-optimal algorithm and another, more involved, minimax optimal algorithm (up to logarithmic terms). These algorithms combine the ideas of finite-horizon approximation [Chen et al., 2022a], special Bernstein-style bonuses of the MVP algorithm [Zhang et al., 2020], and adaptive confidence widening [Wei and Luo, 2021], as well as some new techniques such as properly penalizing long-horizon policies. Finally, when c and P are unknown, we develop a variant of the MASTER algorithm [Wei and Luo, 2021] and integrate the aforementioned ideas into it to achieve O(min{B⋆S
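To make the "Bernstein-style bonus" idea concrete, here is a minimal illustrative sketch in Python of the generic form such a bonus takes: a variance-dependent leading term plus a lower-order 1/n correction. The function name, the constants c1 and c2, and the confidence parameter delta are placeholders for illustration, not the exact quantities used by MVP or by the algorithms in this paper.

```python
import numpy as np

def bernstein_bonus(value_samples, n, delta=0.01, c1=2.0, c2=7.0):
    """Illustrative Bernstein-style exploration bonus.

    Scales with the empirical variance of next-state value samples
    (leading sqrt(Var/n) term) plus a lower-order log/n term, as in
    Bernstein-type concentration bounds. Constants c1, c2 and delta
    are placeholder choices, not the paper's.
    """
    log_term = np.log(1.0 / delta)
    var = np.var(value_samples)  # empirical variance of the value estimates
    return c1 * np.sqrt(var * log_term / n) + c2 * log_term / n

# The bonus shrinks as the visit count n of a state-action pair grows,
# so exploration concentrates on rarely visited pairs.
rng = np.random.default_rng(0)
samples = rng.normal(loc=1.0, scale=0.5, size=100)
print(bernstein_bonus(samples, n=10) > bernstein_bonus(samples, n=1000))
```

The variance-dependent leading term is what distinguishes Bernstein-style bonuses from cruder Hoeffding-style ones, which use a worst-case range bound in place of the empirical variance and therefore over-explore low-variance state-action pairs.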