Reinforcement Learning in POMDP's via Direct Gradient Ascent
Baxter, Jonathan, Bartlett, Peter L.
–arXiv.org Artificial Intelligence
This paper discusses theoretical and experimental aspects of gradient-based approaches to the direct optimization of policy performance in controlled POMDPs. We introduce GPOMDP, a REINFORCE-like algorithm for estimating an approximation to the gradient of the average reward as a function of the parameters of a stochastic policy. The algorithm's chief advantages are that it requires only a single sample path of the underlying Markov chain, it uses only one free parameter $\beta \in [0,1)$, which has a natural interpretation in terms of bias-variance trade-off, and it requires no knowledge of the underlying state. We prove convergence of GPOMDP and show how the gradient estimates produced by GPOMDP can be used in a conjugate-gradient procedure to find local optima of the average reward.
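The abstract describes a single-sample-path estimator with a discount parameter $\beta$ controlling the bias-variance trade-off. Below is a minimal sketch of such an estimator, assuming a hypothetical environment interface (`env.reset()`, `env.step(action)` returning an observation and a reward) and hypothetical policy helpers `sample_action` and `policy_grad_logp`; none of these names come from the paper, and the sketch is illustrative rather than the authors' implementation.

```python
import numpy as np

def gpomdp_estimate(env, sample_action, policy_grad_logp, theta, T, beta=0.9):
    """Sketch of a GPOMDP-style gradient estimate from one sample path.

    Assumed (hypothetical) interfaces:
      env.reset() -> observation
      env.step(action) -> (observation, reward)
      sample_action(theta, obs) -> action drawn from the stochastic policy
      policy_grad_logp(theta, obs, action) -> gradient of log pi(action | theta, obs)
    """
    z = np.zeros_like(theta)      # discounted eligibility trace
    delta = np.zeros_like(theta)  # running estimate of the performance gradient
    obs = env.reset()
    for t in range(T):
        action = sample_action(theta, obs)
        grad_logp = policy_grad_logp(theta, obs, action)
        obs, reward = env.step(action)
        # Trace update: z <- beta * z + grad log pi(action | theta, obs)
        z = beta * z + grad_logp
        # Running average of reward-weighted trace; beta trades bias for variance
        delta += (reward * z - delta) / (t + 1)
    return delta
```

The returned estimate could then be fed to a line-search or conjugate-gradient routine, in the spirit of the optimization procedure the abstract mentions.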
Dec-8-2025