Gaussian Processes in Reinforcement Learning
Malte Kuss, Carl E. Rasmussen
Neural Information Processing Systems
We exploit some useful properties of Gaussian process (GP) regression models for reinforcement learning in continuous state spaces and discrete time. We demonstrate how the GP model allows evaluation of the value function in closed form. The resulting policy iteration algorithm is demonstrated on a simple problem with a two-dimensional state space. Further, we speculate that the intrinsic ability of GP models to characterise distributions of functions would allow the method to capture entire distributions over future values instead of merely their expectation, which has traditionally been the focus of much of reinforcement learning.
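The closed-form evaluation mentioned in the abstract rests on standard GP regression, where the posterior mean and covariance at test points are given analytically by linear algebra on the kernel matrix. The following is a minimal illustrative sketch of that machinery (a squared-exponential kernel and a toy 1-D dataset are assumptions for illustration, not the paper's specific model or settings):

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential covariance between two sets of points.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xs, noise=1e-2):
    # Closed-form GP posterior mean and covariance at test inputs Xs,
    # given noisy observations y at training inputs X.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xs, X)
    Kss = rbf_kernel(Xs, Xs)
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, cov

# Toy example: values observed at three 1-D states, queried at a new state.
X = np.array([[0.0], [1.0], [2.0]])
y = np.sin(X[:, 0])
mean, cov = gp_posterior(X, y, np.array([[1.5]]))
```

Because the posterior is Gaussian with these analytic moments, expectations of the value function under Gaussian state-transition distributions can also be computed in closed form, which is the property the policy iteration algorithm exploits.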
Dec-31-2004