In this section, we derive a lower bound for the trace of the covariance of the PG estimator in environments with stochastic dynamics. Let us assume that the initial policy $\pi(a_i \mid s_i)$ follows the uniform distribution, i.e., $\pi(a_i = -1 \mid s_i) = \pi(a_i = +1 \mid s_i) = \frac{1}{2}$ for all $i$. Its optimal policy for $t$, $\pi_{\theta_f}(t \mid s)$, should produce $t \le x$, because otherwise it risks ending up with a reward of $\nu$, which is not an optimum. Since FiGAR-C is unaware of underlying state changes, its best strategy is to shorten the duration of actions to be more responsive. In VPG, we do not use any variance-reduction technique such as value functions or the reward-to-go policy gradient; hence, the formula for its gradient estimator is identical to Equation (3).
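The intuition behind the variance bound can be illustrated numerically: with a fixed physical horizon $T$, shrinking the time scale $\delta$ multiplies the number of decision steps $H = T/\delta$, and the score-function terms of the PG estimator accumulate across steps. The following is a minimal Monte Carlo sketch, not code from the paper; the Bernoulli $\pm 1$ policy at $\theta = 0$ (where $\partial_\theta \log \pi(a) = a/2$) and the constant terminal reward are simplifying assumptions chosen so that only the score term contributes to the variance.

```python
import numpy as np

rng = np.random.default_rng(0)

def pg_estimator_variance(n_steps, n_samples=2000):
    """Empirical variance of the vanilla PG estimator g = (sum_t score_t) * R
    for a uniform policy over {-1, +1} at theta = 0 and a constant reward R = 1.
    Theoretically Var[g] = n_steps / 4, growing linearly in the step count."""
    grads = np.empty(n_samples)
    for k in range(n_samples):
        actions = rng.choice([-1.0, 1.0], size=n_steps)
        score = np.sum(actions / 2.0)  # d/dtheta log pi(a_t) = a_t / 2
        grads[k] = score * 1.0         # constant reward R = 1
    return np.var(grads)

var_coarse = pg_estimator_variance(10)    # large delta -> few decision steps
var_fine = pg_estimator_variance(1000)    # small delta -> many decision steps
print(var_coarse, var_fine)
```

Halving $\delta$ doubles the step count and (in this toy setting) roughly doubles the estimator variance, matching the qualitative claim that the variance diverges as $\delta \to 0$.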
Time Discretization-Invariant Safe Action Repetition for Policy Gradient Methods
Seohong Park, Jaekyeom Kim, Gunhee Kim
In reinforcement learning, continuous time is often discretized by a time scale $\delta$, to which the resulting performance is known to be highly sensitive. In this work, we seek to find a $\delta$-invariant algorithm for policy gradient (PG) methods, which performs well regardless of the value of $\delta$. We first identify the underlying reasons that cause PG methods to fail as $\delta \to 0$, proving that the variance of the PG estimator can diverge to infinity in stochastic environments under a certain assumption of stochasticity. While durative actions or action repetition can be employed to have $\delta$-invariance, previous action repetition methods cannot immediately react to unexpected situations in stochastic environments. We thus propose a novel $\delta$-invariant method named Safe Action Repetition (SAR) applicable to any existing PG algorithm. SAR can handle the stochasticity of environments by adaptively reacting to changes in states during action repetition. We empirically show that our method is not only $\delta$-invariant but also robust to stochasticity, outperforming previous $\delta$-invariant approaches on eight MuJoCo environments with both deterministic and stochastic settings. Our code is available at https://vision.snu.ac.kr/projects/sar.
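The core mechanism described above, repeating an action but reacting adaptively to state changes during the repetition, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `RandomWalkEnv`, the fixed `threshold`, and `max_repeat` are invented here for demonstration, and the actual SAR method decides when to stop repetition as part of the learned policy rather than via a hand-set distance threshold.

```python
import numpy as np

class RandomWalkEnv:
    """Toy stochastic 1-D environment: the action nudges the state, plus noise."""
    def __init__(self, horizon=100, noise=0.5, seed=0):
        self.horizon, self.noise = horizon, noise
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.t = 0
        self.s = np.zeros(1)
        return self.s.copy()

    def step(self, a):
        self.t += 1
        self.s += a + self.noise * self.rng.normal(size=1)
        reward = -abs(self.s[0])            # reward for staying near the origin
        done = self.t >= self.horizon
        return self.s.copy(), reward, done

def sar_rollout(env, policy, threshold, max_repeat=50):
    """Repeat each chosen action until the state drifts farther than
    `threshold` from where the repetition started (or the episode ends),
    then query the policy again. This keeps behavior delta-invariant while
    still reacting immediately to unexpected state changes."""
    s = env.reset()
    total_reward, done = 0.0, False
    while not done:
        a = policy(s)
        s_start = s
        for _ in range(max_repeat):
            s, r, done = env.step(a)
            total_reward += r
            if done or np.linalg.norm(s - s_start) > threshold:
                break  # state changed enough: pick a new action
    return total_reward

env = RandomWalkEnv()
ret = sar_rollout(env, policy=lambda s: -0.5 * s, threshold=1.0)
print(ret)
```

Because the repetition length is governed by a distance in state space rather than a fixed number of time steps, halving $\delta$ roughly doubles how many raw steps each repetition spans but leaves the sequence of policy queries, and hence the learning problem, essentially unchanged.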