Appendices

A Proof for Theorem 1

Before proceeding, let us define an additional term S = … . Note that, trivially, we have $S \leq KM$. We now construct a high-probability confidence set. This construction is based on Lemma 17 of Jaksch et al. [2010], which in turn relies on the following results. From Theorem 2.1 of Weissman et al. [2003], for any $\varepsilon > 0$, we have
$$\mathbb{P}\left\{ \left\| p(\cdot) - \hat{p}(\cdot) \right\|_{1} \geq \varepsilon \right\} \leq \left(2^{|\mathcal{S}|} - 2\right) \exp\left(-\frac{n\varepsilon^{2}}{2}\right).$$
On the other hand, based on Hoeffding's inequality, if we choose … , where (b) follows from the "constant-shift" property of the DP operator, and (c) follows from … .
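The Weissman et al. [2003] deviation bound invoked above can be checked numerically. The following is a minimal sketch, not part of the proof: the uniform distribution over $k = 4$ outcomes, the sample size, and the threshold are all illustrative assumptions.

```python
import numpy as np

# Empirical check of the L1 concentration bound of Weissman et al. [2003]:
#   P{ ||p - p_hat||_1 >= eps } <= (2^k - 2) * exp(-n * eps^2 / 2)
# for an (assumed) uniform distribution p over k outcomes.
rng = np.random.default_rng(0)
k, n, eps, trials = 4, 100, 0.6, 5000
p = np.full(k, 1.0 / k)

hits = 0
for _ in range(trials):
    counts = rng.multinomial(n, p)   # n i.i.d. samples from p
    p_hat = counts / n               # empirical distribution
    if np.abs(p_hat - p).sum() >= eps:
        hits += 1

bound = (2 ** k - 2) * np.exp(-n * eps ** 2 / 2)
print(hits / trials, bound)  # empirical frequency vs. theoretical bound
```

With these parameters the theoretical bound is roughly $2 \times 10^{-7}$, and the empirical frequency of large deviations stays below it.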
Review for NeurIPS paper: On Efficiency in Hierarchical Reinforcement Learning
The justification relies on Bayesian regret bounds, comparing the bound for the "flat" MDP with that for the decomposed MDP to derive the following conditions for beneficial decomposition: either all subMDPs must have a small state space, or the original MDP must decompose into a small number of equivalent subMDPs. The paper then discusses the computational complexity of planning with hierarchical structure via the Planning with Exit Profiles (PEP) algorithm. The authors derive a bound on the computational complexity of PEP, from which efficient planning requires that all subMDPs be small, with a small number of exit profiles and total exit states. Finally, the paper presents a bound on the performance of PEP and discusses the conditions for finding high-quality exit profiles, which is a prerequisite for PEP's near-optimal performance.
Quoting from the reviewers: R1: The paper presents a novel framework for analyzing potential efficiencies in reinforcement learning due to hierarchical structure in MDPs. This framework formally defines several useful concepts (subMDPs, equivalent subMDPs, exit states, and exit profiles) that allow for an elegant refinement of regret bounds in a well-defined regime. The identification of particular properties (subMDPs, the exit state set, and equivalence of subMDPs) provides a clear and useful basis for theoretical analysis of hierarchical reinforcement learning. Overall, this paper provides an elegant, concrete framework for formalizing hierarchical structure and quantifying the efficiency such structure may allow, deriving results on learning and planning efficiency when the reinforcement learning problem has repeated structure.
Demystifying Linear MDPs and Novel Dynamics Aggregation Framework
In this work, we prove that, in linear MDPs, the feature dimension $d$ is lower bounded by $S/U$ in order to aptly represent transition probabilities, where $S$ is the size of the state space and $U$ is the maximum number of directly reachable states. Hence, $d$ can still scale with $S$, depending on the direct reachability of the environment. To address this limitation of linear MDPs, we propose a novel structural aggregation framework based on dynamics, termed "dynamics aggregation". For this newly proposed framework, we design a provably efficient hierarchical reinforcement learning algorithm with linear function approximation that leverages aggregated sub-structures. Our algorithm is statistically efficient, achieving a regret of $\tilde{O}( d_{\psi}^{3/2} H^{3/2} \sqrt{NT} )$, where $d_{\psi}$ is the feature dimension of the aggregated subMDPs and $N$ is the number of aggregated subMDPs. We establish that the condition $d_{\psi}^3 N \ll d^{3}$ is readily met in most real-world environments with hierarchical structure, enabling a substantial improvement in the regret bound over LSVI-UCB, which enjoys a regret of $\tilde{O}(d^{3/2} H^{3/2} \sqrt{T})$. To the best of our knowledge, this work presents the first HRL algorithm with linear function approximation that offers provable guarantees.
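The lower bound $d \geq S/U$ can be illustrated numerically. The sketch below is an illustrative assumption (a deterministic cycle MDP with a single action, so $U = 1$), using the standard linear MDP factorization $P(s' \mid s, a) = \langle \phi(s,a), \mu(s') \rangle$: the transition matrix then factors through an inner dimension $d$, so its rank is at most $d$.

```python
import numpy as np

# Toy deterministic cycle MDP: one action, state i moves to (i+1) mod S.
# Each state directly reaches U = 1 next state, so the lower bound
# d >= S/U specializes to d >= S.
S = 8
P = np.zeros((S, S))          # rows: (s, a) pairs (single action), cols: s'
for s in range(S):
    P[s, (s + 1) % S] = 1.0   # deterministic transition

# In a linear MDP, P = Phi @ Mu with Phi of shape (S*A, d) and Mu of
# shape (d, S), hence rank(P) <= d.  This permutation matrix has full
# rank S, forcing d >= S.
rank = np.linalg.matrix_rank(P)
print(rank)  # 8
```

The same argument applies whenever reachable sets are small relative to $S$: the rank of the transition matrix, and hence the required feature dimension, is at least $S/U$.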