Harnessing Bounded-Support Evolution Strategies for Policy Refinement
Ethan Hirschowitz, Fabio Ramos
arXiv.org Artificial Intelligence
Improving already-competent robot policies with on-policy RL is often hampered by noisy, low-signal gradients. We revisit Evolution Strategies (ES) as a policy-gradient proxy and localize exploration with bounded, antithetic triangular perturbations, making ES well suited to policy refinement. We propose Triangular-Distribution ES (TD-ES), which pairs bounded triangular noise with a centered-rank finite-difference estimator to deliver stable, parallelizable, gradient-free updates. In a two-stage pipeline (PPO pretraining followed by TD-ES refinement), this preserves early sample efficiency while enabling robust late-stage gains. Across a suite of robotic manipulation tasks, TD-ES raises success rates by 26.5% relative to PPO and greatly reduces variance, offering a simple, compute-light path to reliable refinement.
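The core update described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes symmetric triangular perturbations on a bounded interval, antithetic (paired ±) evaluation, and a centered-rank finite-difference gradient estimate; all function names and hyperparameter values here are hypothetical.

```python
import numpy as np

def triangular_noise(shape, scale, rng):
    # Symmetric triangular perturbations with bounded support [-scale, scale]
    return rng.triangular(-scale, 0.0, scale, size=shape)

def centered_ranks(x):
    # Map raw fitness values to centered ranks in [-0.5, 0.5] for robustness
    ranks = np.empty(len(x))
    ranks[np.argsort(x)] = np.arange(len(x))
    return ranks / (len(x) - 1) - 0.5

def td_es_step(theta, fitness_fn, n_pairs=8, scale=0.05, lr=0.01, seed=0):
    # One gradient-free update: sample antithetic triangular perturbations,
    # evaluate theta +/- eps, and combine via a centered-rank
    # finite-difference estimator.
    rng = np.random.default_rng(seed)
    eps = triangular_noise((n_pairs, theta.size), scale, rng)
    f_plus = np.array([fitness_fn(theta + e) for e in eps])
    f_minus = np.array([fitness_fn(theta - e) for e in eps])
    scores = centered_ranks(np.concatenate([f_plus, f_minus]))
    r_plus, r_minus = scores[:n_pairs], scores[n_pairs:]
    grad = ((r_plus - r_minus)[:, None] * eps).sum(axis=0) / (n_pairs * scale)
    return theta + lr * grad
```

Each perturbation pair is evaluated independently, so the `2 * n_pairs` rollouts parallelize trivially, and bounding the noise keeps refinement local to the pretrained policy.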
Nov-17-2025