Adaptive Variance for Changing Sparse-Reward Environments
Xingyu Lin, Pengsheng Guo, Carlos Florensa, David Held
–arXiv.org Artificial Intelligence
Robots that are trained to perform a task in a fixed environment often fail when facing unexpected changes to the environment, due to a lack of exploration. We propose a principled way to adapt the policy for better exploration in changing sparse-reward environments. Unlike previous works, which explicitly model environmental changes, we analyze the relationship between the value function and the optimal exploration for a Gaussian-parameterized policy, and show that our theory leads to an effective strategy for adjusting the variance of the policy, enabling fast adaptation to changes in a variety of sparse-reward environments.
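The core idea in the abstract, tying the exploration variance of a Gaussian policy to the value function, can be illustrated with a minimal sketch. This is not the paper's actual derivation: the specific rule (widening the policy's standard deviation as the estimated value of the current state drops) and all names here (`adapt_policy_std`, `base_std`, `max_std`, `v_max`) are illustrative assumptions.

```python
import numpy as np

def adapt_policy_std(value_estimate, base_std=0.1, max_std=1.0, v_max=1.0):
    """Hypothetical variance-adaptation rule (illustrative, not the paper's).

    When the estimated value drops (suggesting the environment changed and the
    old policy no longer reaches the sparse reward), widen the Gaussian
    policy's standard deviation to encourage exploration; when the value is
    high, shrink it back toward the exploitative base_std.
    """
    # Normalize the value estimate into [0, 1]; a low value means low
    # confidence in the current policy, hence more exploration noise.
    confidence = float(np.clip(value_estimate / v_max, 0.0, 1.0))
    return base_std + (1.0 - confidence) * (max_std - base_std)

# Usage: the Gaussian policy samples actions around its mean with the
# adapted standard deviation.
rng = np.random.default_rng(0)
mean_action = np.array([0.3, -0.2])          # placeholder policy-network output
std = adapt_policy_std(value_estimate=0.2)   # low value -> std near max_std
action = rng.normal(mean_action, std)
```

In this sketch a drop in the value estimate after an environment change maps directly to larger exploration noise, which is the qualitative behavior the abstract describes.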
Mar-14-2019
- Country:
  - North America > United States (0.46)
- Genre:
  - Research Report (0.82)
- Technology:
  - Information Technology > Artificial Intelligence
    - Machine Learning
      - Neural Networks (0.68)
      - Reinforcement Learning (0.94)
      - Statistical Learning (0.68)
    - Robots (1.00)