Robust Reinforcement Learning
Neural Information Processing Systems
Kenji Doya
ATR International; CREST, JST
2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0288, Japan
doya@isd.atr.co.jp

Abstract

This paper proposes a new reinforcement learning (RL) paradigm that explicitly takes into account input disturbance as well as modeling errors. The use of environmental models in RL is quite popular both for off-line learning by simulation and for online action planning. However, the difference between the model and the real environment can lead to unpredictable, often unwanted, results. Based on the theory of H∞ control, we consider a differential game in which a 'disturbing' agent (disturber) tries to make the worst possible disturbance while a 'control' agent (actor) tries to make the best control input. The problem is formulated as finding a min-max solution of a value function that takes into account the norm of the output deviation and the norm of the disturbance. We derive online learning algorithms for estimating the value function and for calculating the worst disturbance and the best control with reference to the value function.
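The min-max idea in the abstract can be sketched on a toy problem: for a scalar linear plant with both a control input and a disturbance, the actor picks the control that minimizes a quadratic value estimate while the disturber picks the disturbance that maximizes it, and the value weight is updated from a TD-style error. This is only an illustrative sketch under assumed parameters (plant gain `a`, cost weights `q`, `r`, disturbance attenuation level `g`, step sizes), not the paper's actual algorithm or experiments.

```python
import numpy as np

# Toy sketch of minimax (H-infinity-style) value learning.
# Plant:        dx = (a*x + u + w) * dt
# Running cost: q*x^2 + r*u^2 - g^2 * w^2
# The actor chooses u to minimize the value; the disturber chooses w to
# maximize it. All parameter values below are illustrative assumptions.

a, q, r, g = -0.5, 1.0, 1.0, 2.0   # plant and cost parameters (assumed)
dt, alpha = 0.01, 0.1              # integration step, learning rate (assumed)
p = 0.0                            # quadratic value estimate V(x) = p * x^2

for episode in range(200):
    x = 1.0                         # reset state each episode
    for _ in range(100):
        dVdx = 2.0 * p * x
        u = -dVdx / (2.0 * r)       # best control: minimizes the Hamiltonian
        w = dVdx / (2.0 * g**2)     # worst disturbance: maximizes it
        cost = q * x**2 + r * u**2 - g**2 * w**2
        x_next = x + (a * x + u + w) * dt
        # TD error for the value estimate (undiscounted, for brevity)
        td = cost * dt + p * x_next**2 - p * x**2
        p += alpha * td * x**2      # gradient step on the quadratic weight
        x = x_next
```

For this scalar case the game-theoretic Riccati equation `2*a*p + q - p**2/r + p**2/g**2 = 0` has a positive root near `p ≈ 0.667`, which the learned weight should approach as episodes accumulate.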
Dec-31-2001