Direct Policy Gradients: Direct Optimization of Policies in Discrete Action Spaces
Direct optimization (McAllester et al., 2010; Song et al., 2016) is an appealing framework that replaces integration with optimization of a random objective when approximating gradients in models with discrete random variables (Lorberbom et al., 2018). A* sampling (Maddison et al., 2014) is a framework for optimizing such random objectives over large spaces. We show how to combine these techniques to yield a reinforcement learning algorithm that approximates a policy gradient by finding trajectories that optimize a random objective. We call the resulting algorithms \emph{direct policy gradient} (DirPG) algorithms. A main benefit of DirPG algorithms is that they allow domain knowledge to be inserted at training time in the form of upper bounds on return-to-go, as in heuristic search, while still directly computing a policy gradient. We further analyze their properties, showing that there are cases where DirPG has an exponentially larger probability of sampling informative gradients than REINFORCE. We also show that the estimator has a built-in variance reduction technique, and that a parameter previously viewed as a numerical approximation can be interpreted as controlling risk sensitivity. Empirically, we evaluate the effect of key degrees of freedom and show that the algorithm performs well in illustrative domains compared to baselines.
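As a rough illustration of the direct-optimization idea, the sketch below estimates a policy gradient for a hypothetical one-step problem over K discrete actions (the rewards, logits, and eps value are illustrative assumptions, not taken from the paper): perturb the log-probabilities with i.i.d. Gumbel noise, find the best action with and without a scaled reward bonus, and take the scaled difference of the two log-probability gradients.

```python
import numpy as np

# Hypothetical toy setup: a one-step softmax policy over K discrete
# actions with logits theta and a fixed reward vector r (all values
# illustrative, not from the paper).
rng = np.random.default_rng(0)
K = 5
theta = np.zeros(K)                      # policy logits
r = np.array([0.0, 0.2, 1.0, 0.1, 0.5])  # reward of each action

def dirpg_gradient(theta, r, eps=1.0, rng=rng):
    """One sample of a direct-optimization gradient estimate.

    Perturb each action's log-probability with Gumbel noise.
    a_sample maximizes the perturbed log-probability alone (an exact
    policy sample, by the Gumbel-max trick); a_direct additionally
    adds eps * reward.  The estimate is the scaled difference of the
    two log-probability gradients, which for a softmax policy reduces
    to a difference of one-hot vectors.
    """
    logp = theta - np.log(np.sum(np.exp(theta)))
    g = rng.gumbel(size=K)
    a_sample = np.argmax(logp + g)            # "sampled" action
    a_direct = np.argmax(logp + g + eps * r)  # "direct" action
    grad_logp = lambda a: np.eye(K)[a] - np.exp(logp)  # d log p(a)/d theta
    return (grad_logp(a_direct) - grad_logp(a_sample)) / eps

# Averaged over many draws, the estimate points toward the
# high-reward action (index 2 here).
grad = np.mean([dirpg_gradient(theta, r) for _ in range(2000)], axis=0)
```

In a sequential setting the argmax ranges over exponentially many trajectories, which is where a heuristic search such as A* sampling, guided by upper bounds on return-to-go, replaces the exhaustive maximization above.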
Many problems in machine learning reduce to learning a probability distribution (or policy) over sequences of discrete actions so as to maximize a downstream utility function. Examples include generating text sequences to maximize a task-specific metric like BLEU and generating action sequences in reinforcement learning (RL) to maximize expected return.
- North America > United States > Maryland (0.04)
- North America > Canada (0.04)
- Europe > Spain > Canary Islands (0.04)
- Asia > Middle East > Israel (0.04)
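The hierarchy above can be read as a small space of action sequences, each path carrying a probability. As a minimal sketch (the path names and probabilities here are illustrative assumptions), exact sampling over such an enumerable space can be done with the Gumbel-max trick, the same argmax that A* sampling carries out lazily over spaces too large to enumerate:

```python
import math
import random

# Hypothetical path probabilities echoing the example above; the
# remaining-mass entry is added so the distribution sums to one.
paths = {
    ("North America", "United States", "Maryland"): 0.04,
    ("North America", "Canada"): 0.04,
    ("Europe", "Spain", "Canary Islands"): 0.04,
    ("Asia", "Middle East", "Israel"): 0.04,
    ("Europe", "France"): 0.84,
}

def gumbel_max_sample(paths, rng=random):
    """Exact sample: argmax of log-probability plus Gumbel noise.

    Each path's log-probability is perturbed with an independent
    Gumbel(0, 1) draw; the argmax is then distributed exactly
    according to the path probabilities.
    """
    best, best_score = None, -math.inf
    for path, p in paths.items():
        g = -math.log(-math.log(rng.random()))  # Gumbel(0, 1) draw
        score = math.log(p) + g
        if score > best_score:
            best, best_score = path, score
    return best

sample = gumbel_max_sample(paths)
```

Enumerating every path is only feasible for toy spaces like this one; the point of A* sampling is to find the same argmax while expanding only a small part of the tree.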