Recovery RL: Safe Reinforcement Learning with Learned Recovery Zones
Thananjeyan, Brijen, Balakrishna, Ashwin, Nair, Suraj, Luo, Michael, Srinivasan, Krishnan, Hwang, Minho, Gonzalez, Joseph E., Ibarz, Julian, Finn, Chelsea, Goldberg, Ken
Abstract -- Safety remains a central obstacle preventing widespread use of RL in the real world: learning new tasks in uncertain environments requires extensive exploration, but safety requires limiting exploration. We propose Recovery RL, an algorithm which navigates this tradeoff by (1) leveraging offline data to learn about constraint-violating zones before policy learning and (2) separating the goals of improving task performance and constraint satisfaction across two policies: a task policy that only optimizes the task reward and a recovery policy that guides the agent to safety when constraint violation is likely. We evaluate Recovery RL on 6 simulation domains, including two contact-rich manipulation tasks and an image-based navigation task, and an image-based obstacle avoidance task on a physical robot. We compare Recovery RL to 5 prior safe RL methods which jointly optimize for task performance and safety via constrained optimization or reward shaping, and find that Recovery RL outperforms the next best prior method across all domains. Results suggest that Recovery RL trades off constraint violations and task successes 2 - 80 times more efficiently in simulation domains and 3 times more efficiently in physical experiments. See the project website for videos and supplementary material.

Figure 1: Recovery RL can safely learn policies for contact-rich tasks from high-dimensional image observations in simulation experiments.
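To make the two-policy mechanism concrete, the sketch below shows one way a learned safety critic could gate between the task policy and the recovery policy, in the spirit of the abstract's description. It is a minimal illustration only: the names `SafetyCritic`, `select_action`, and `eps_risk` are assumptions for this sketch, not the authors' released implementation.

```python
# Minimal sketch of a Recovery RL-style action selection rule (illustrative only).
# SafetyCritic, select_action, and eps_risk are hypothetical names, not the
# paper's reference code.

import numpy as np


class SafetyCritic:
    """Placeholder safety critic Q_risk(s, a): an estimate of how likely a
    constraint violation is if action a is taken in state s. In the paper this
    is learned from offline constraint-violation data before policy learning
    and refined online; here it is a stub returning a dummy value."""

    def estimate_risk(self, state: np.ndarray, action: np.ndarray) -> float:
        # A real implementation would query a learned Q-network here.
        return 0.0


def select_action(state, task_policy, recovery_policy, safety_critic, eps_risk=0.3):
    """Composite policy: propose the task action, but hand control to the
    recovery policy whenever the safety critic flags the proposal as too
    likely to lead into a constraint-violating zone."""
    a_task = task_policy(state)
    if safety_critic.estimate_risk(state, a_task) > eps_risk:
        # Recovery zone: steer back toward safety instead of pursuing reward.
        return recovery_policy(state)
    return a_task


if __name__ == "__main__":
    # Toy usage with random policies on a 4-dim state / 2-dim action space.
    rng = np.random.default_rng(0)
    task_policy = lambda s: rng.uniform(-1, 1, size=2)
    recovery_policy = lambda s: np.zeros(2)  # e.g. a conservative "back off" action
    critic = SafetyCritic()
    action = select_action(rng.standard_normal(4), task_policy, recovery_policy, critic)
    print("executed action:", action)
```

This separation is the point of the abstract's design: the task policy never has to trade reward against safety, because the gating critic and recovery policy absorb that responsibility.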
arXiv.org Artificial Intelligence
Oct-29-2020