Policy Gradient With Serial Markov Chain Reasoning
Neural Information Processing Systems
We introduce a new framework that performs decision-making in reinforcement learning (RL) as an iterative reasoning process. We perform action selection by simulating the reasoning Markov chain (RMC) for enough reasoning steps to approach its steady-state distribution. We show that our framework has several useful properties that are inherently missing from traditional RL. For instance, it allows agent behavior to approximate any continuous distribution over actions by parameterizing the RMC with a simple Gaussian transition function. Moreover, the number of reasoning steps needed to reach convergence can scale adaptively with the difficulty of each action-selection decision and can be reduced by re-using past solutions.
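The action-selection loop described above can be illustrated with a minimal sketch. Everything here is hypothetical: `gaussian_transition` is a hand-coded stand-in for a learned Gaussian transition network, and the stopping tolerance, step limit, and target function are illustrative choices, not the paper's method. The sketch shows the three properties the abstract names: a Gaussian transition function, adaptive stopping once the chain stops moving, and warm-starting from a past solution.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_transition(state, action):
    # Hypothetical stand-in for a learned Gaussian transition function:
    # the mean contracts toward a state-dependent target action, and the
    # noise scale shrinks as the chain settles near that target.
    target = np.tanh(state)
    mean = action + 0.5 * (target - action)
    std = 0.1 * np.linalg.norm(target - action)
    return mean, std

def select_action(state, prev_action=None, max_steps=100, tol=1e-2):
    # Simulate the reasoning chain until successive iterates stop moving,
    # i.e. an approximation of reaching the steady-state distribution.
    # Warm-starting from prev_action re-uses a past solution.
    a = prev_action if prev_action is not None else np.zeros_like(state)
    for step in range(1, max_steps + 1):
        mean, std = gaussian_transition(state, a)
        a_next = mean + std * rng.standard_normal(a.shape)
        if np.linalg.norm(a_next - a) < tol:  # adaptive stopping rule
            return a_next, step
        a = a_next
    return a, max_steps

state = np.array([0.3, -1.2])
action, steps = select_action(state)                        # cold start
action2, steps2 = select_action(state, prev_action=action)  # warm start
```

Because the transition contracts toward its target, an "easy" decision (or a warm-started one) triggers the stopping rule after few steps, while a chain started far from the steady state reasons for longer.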