Hierarchical model-based policy optimization: from actions to action sequences and back
arXiv.org Artificial Intelligence
We develop a normative framework for hierarchical model-based policy optimization based on applying second-order methods in the space of all possible state-action paths. The resulting natural path gradient performs policy updates in a manner that is sensitive to the long-range correlational structure of the induced stationary state-action densities. We demonstrate that the natural path gradient can be computed exactly given an environment dynamics model and depends on expressions akin to higher-order successor representations. In simulation, we show that the prioritization of local policy updates in the resulting policy flow reflects the intuitive state-space hierarchy in several toy problems.
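For context on the "expressions akin to higher-order successor representations" mentioned above: the (first-order) successor representation of a fixed policy can be computed exactly from a dynamics model as M = (I − γP)⁻¹, the discounted expected future occupancy of each state. A minimal sketch, using a hypothetical 3-state transition matrix and discount factor chosen purely for illustration (not taken from the paper):

```python
import numpy as np

# Hypothetical transition matrix P under a fixed policy over 3 states,
# and a discount factor gamma; both are illustrative assumptions.
P = np.array([
    [0.9, 0.1, 0.0],
    [0.0, 0.9, 0.1],
    [0.1, 0.0, 0.9],
])
gamma = 0.95

# Successor representation: M[s, s'] = expected discounted number of future
# visits to s' when starting from s, i.e. M = (I - gamma * P)^{-1}.
M = np.linalg.inv(np.eye(3) - gamma * P)

# Since P is row-stochastic, each row of M sums to 1 / (1 - gamma).
print(M.sum(axis=1))
```

Higher-order variants of this quantity, as referenced in the abstract, would build on this same resolvent structure; the exact expressions are given in the paper itself.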
Nov-28-2019