SACPlanner: Real-World Collision Avoidance with a Soft Actor Critic Local Planner and Polar State Representations
Nakhleh, Khaled, Raza, Minahil, Tang, Mack, Andrews, Matthew, Boney, Rinu, Hadzic, Ilija, Lee, Jeongran, Mohajeri, Atefeh, Palyutina, Karina
arXiv.org Artificial Intelligence
We study the training performance of ROS local planners based on Reinforcement Learning (RL), and the trajectories they produce on real-world robots. We show that recent enhancements to the Soft Actor-Critic (SAC) algorithm, such as RAD and DrQ, achieve almost perfect training after only 10,000 episodes. We also observe that on real-world robots the resulting SACPlanner is more reactive to obstacles than traditional ROS local planners such as the Dynamic Window Approach (DWA).
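The abstract references two ingredients that can be sketched concretely: a polar state representation built from a lidar scan, and RAD/DrQ-style random-crop augmentation applied to image observations during SAC training. The snippet below is a minimal illustration of both ideas, not the paper's actual implementation; the function names, bin counts, and image sizes are assumptions chosen for clarity.

```python
import numpy as np

def polar_image(scan_ranges, n_bins=64, max_range=5.0):
    """Hypothetical polar state representation: quantize a 1-D lidar
    scan (one range reading per angle) into a 2-D (angle x range-bin)
    occupancy image suitable as input to an image-based RL agent."""
    n_angles = len(scan_ranges)
    img = np.zeros((n_angles, n_bins), dtype=np.float32)
    clipped = np.clip(scan_ranges, 0.0, max_range)
    # Map each clipped range to its radial bin and mark it occupied.
    bins = np.minimum((clipped / max_range * n_bins).astype(int), n_bins - 1)
    img[np.arange(n_angles), bins] = 1.0
    return img

def random_crop(obs, out_size):
    """RAD-style random-crop augmentation: crop a (C, H, W) image
    observation to (C, out_size, out_size) at a random offset."""
    c, h, w = obs.shape
    top = np.random.randint(0, h - out_size + 1)
    left = np.random.randint(0, w - out_size + 1)
    return obs[:, top:top + out_size, left:left + out_size]
```

In a training loop, each sampled replay-buffer observation would be passed through `random_crop` before the SAC critic update; DrQ additionally averages Q-targets over multiple augmented copies.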
Mar-21-2023
- Country:
- Asia > China
- Hong Kong (0.04)
- Europe
- Czechia > Prague (0.04)
- Finland (0.04)
- United Kingdom > England
- Cambridgeshire > Cambridge (0.04)
- North America > United States
- Maryland (0.04)
- Nevada > Clark County
- Las Vegas (0.04)
- Texas (0.04)
- Genre:
- Research Report (0.50)
- Industry:
- Leisure & Entertainment > Games
- Computer Games (0.46)
- Transportation (0.50)