Reduced Policy Optimization for Continuous Control with Hard Constraints
Shutong Ding, Jingya Wang, Ye Shi
Neural Information Processing Systems
Recent advances in constrained reinforcement learning (RL) have endowed RL with certain safety guarantees. However, deploying existing constrained RL algorithms in continuous control tasks with general hard constraints remains challenging, particularly in settings with non-convex hard constraints. Inspired by the generalized reduced gradient (GRG) algorithm, a classical constrained optimization technique, we propose a reduced policy optimization (RPO) algorithm that combines RL with GRG to address general hard constraints.
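The core idea behind GRG, which the abstract invokes, is to use equality constraints to eliminate a subset of "basic" variables, then run gradient descent only in the remaining "nonbasic" variables along the constraint manifold. Below is a minimal toy sketch of that idea on the problem minimize f(x, y) = x² + y² subject to x + y = 1; this is an illustrative assumption for exposition, not the paper's RPO algorithm, and the function name `grg_toy` is hypothetical.

```python
def grg_toy(x=2.0, lr=0.1, steps=200):
    """Reduced-gradient descent on: min x^2 + y^2  s.t.  x + y = 1.

    Nonbasic variable: x. Basic variable: y, solved from the constraint
    as y = 1 - x, so iterates always stay feasible.
    """
    for _ in range(steps):
        y = 1.0 - x            # restore feasibility via the constraint
        # Reduced gradient: d/dx f(x, 1 - x) = 2x + 2y * (dy/dx) = 2x - 2y
        grad = 2.0 * x - 2.0 * y
        x -= lr * grad         # descend only in the nonbasic variable
    return x, 1.0 - x
```

Descending the reduced gradient drives the iterate to the constrained minimizer (x, y) = (0.5, 0.5) while every iterate satisfies x + y = 1 exactly; in the general GRG setting, the basic variables are recovered by solving a nonlinear system rather than a closed-form substitution.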