Collaborating Authors: Jonsson, Anders


Hierarchical Width-Based Planning and Learning

arXiv.org Artificial Intelligence

Width-based search methods have demonstrated state-of-the-art performance in a wide range of testbeds, from classical planning problems to image-based simulators such as Atari games. These methods scale independently of the size of the state space, but exponentially in the problem width. In practice, running the underlying algorithm, Iterated Width (IW), with a width larger than 1 is computationally intractable, preventing IW from solving higher-width problems. In this paper, we present a hierarchical algorithm that plans at two levels of abstraction. A high-level planner uses abstract features that are incrementally discovered from low-level pruning decisions. We illustrate this algorithm in classical planning PDDL domains as well as in pixel-based simulator domains. In classical planning, we show how IW(1) at two levels of abstraction can solve problems of width 2. For pixel-based domains, we show how, in combination with a learned policy and a learned value function, the proposed hierarchical IW can outperform current flat IW-based planners in Atari games with sparse rewards.
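To make the width-1 pruning rule concrete, here is a minimal sketch of IW(1): a breadth-first search that discards any state that does not make at least one atom (boolean feature) true for the first time. The `atoms`, `successors`, and `goal_test` callables are assumed interfaces, not the paper's implementation.

```python
from collections import deque

def iw1_search(initial_state, goal_test, successors, atoms):
    """IW(1): breadth-first search with width-1 novelty pruning.
    `atoms(state)` returns the set of atoms that hold in `state`."""
    seen_atoms = set()                      # atoms made true by some visited state
    frontier = deque([(initial_state, [])])
    while frontier:
        state, plan = frontier.popleft()
        novel = atoms(state) - seen_atoms
        if not novel:                       # width-1 pruning: nothing new, discard
            continue
        seen_atoms |= novel
        if goal_test(state):
            return plan
        for action, next_state in successors(state):
            frontier.append((next_state, plan + [action]))
    return None                             # no plan found within width 1
```

The hierarchical variant described above runs this kind of search at two levels, where the high-level atoms are abstract features discovered from low-level pruning decisions.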


Hierarchical reinforcement learning for efficient exploration and transfer

arXiv.org Artificial Intelligence

Sparse-reward domains are challenging for reinforcement learning algorithms since significant exploration is needed before encountering a reward for the first time. Hierarchical reinforcement learning can facilitate exploration by reducing the number of decisions necessary before obtaining a reward. In this paper, we present a novel hierarchical reinforcement learning framework based on the compression of an invariant state space that is common to a range of tasks. The algorithm introduces subtasks that consist of moving between the state partitions induced by the compression. Results indicate that the algorithm can successfully solve complex sparse-reward domains, and transfer knowledge to solve new, previously unseen tasks more quickly.
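A minimal sketch of the two-level control scheme implied by the abstract: a high-level policy picks a target partition of the compressed state space, and a low-level policy acts until the partition changes. All names (`compress`, `high_level_policy`, `low_level_policies`, the gym-style `env` interface) are assumptions for illustration.

```python
def hierarchical_episode(env, compress, low_level_policies, high_level_policy,
                         max_steps=1000):
    """One episode of a two-level agent. `compress(s)` maps a raw state to
    its partition id; a subtask is the instruction 'reach partition target'."""
    state = env.reset()
    total_reward, steps = 0.0, 0
    while steps < max_steps:
        z = compress(state)                       # current abstract state
        target = high_level_policy(z)             # subtask: reach `target`
        policy = low_level_policies[(z, target)]
        while compress(state) == z and steps < max_steps:
            action = policy(state)
            state, reward, done, _ = env.step(action)
            total_reward += reward
            steps += 1
            if done:
                return total_reward
    return total_reward
```

The exploration benefit comes from the high level needing only a handful of subtask decisions where a flat agent would need hundreds of primitive decisions.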


Fast active learning for pure exploration in reinforcement learning

arXiv.org Machine Learning

Realistic environments often provide agents with very limited feedback. When the environment is initially unknown, the feedback can be completely absent in the beginning, and the agents may first choose to devote all their effort to exploring efficiently. Exploration remains a challenge: it has been addressed with many hand-tuned heuristics of varying generality on one side, and a few theoretically backed exploration strategies on the other. Many of them rely on intrinsic motivation and, in particular, exploration bonuses. A common rule of thumb for exploration bonuses is to add a $1/\sqrt{n}$ bonus to the empirical estimates of the reward, where $n$ is the number of times this particular state (or state-action pair) was visited. We show that, surprisingly, for a pure-exploration objective of reward-free exploration, bonuses that scale with $1/n$ bring faster learning rates, improving the known upper bounds with respect to the dependence on the horizon $H$. Furthermore, we show that with an improved analysis of the stopping time, we can improve by a factor $H$ the sample complexity in the best-policy identification setting, which is another pure-exploration objective, where the environment provides rewards but the agent is not penalized for its behavior during the exploration phase.
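The two bonus shapes contrasted in the abstract are easy to state in code. The sketch below computes count-based bonuses per state-action pair; the `scale` factor and the use of the horizon are assumptions, since the abstract only specifies the $1/\sqrt{n}$ versus $1/n$ dependence.

```python
import numpy as np

def exploration_bonus(counts, horizon, scale=1.0, pure_exploration=False):
    """Count-based exploration bonus. The 1/sqrt(n) form is the common rule
    of thumb for reward-maximizing exploration; the 1/n form is the one the
    paper argues gives faster rates for reward-free (pure) exploration."""
    n = np.maximum(counts, 1)               # visit counts per (s, a)
    if pure_exploration:
        return scale * horizon / n          # 1/n-style bonus
    return scale * horizon / np.sqrt(n)     # 1/sqrt(n)-style bonus
```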


Adaptive Reward-Free Exploration

arXiv.org Machine Learning

Reward-free exploration is a reinforcement learning setting studied by Jin et al. (2020), who address it by running several algorithms with regret guarantees in parallel. In our work, we instead give a more natural adaptive approach for reward-free exploration which directly reduces upper bounds on the maximum MDP estimation error. We show that, interestingly, our reward-free UCRL algorithm can be seen as a variant of an algorithm of Fiechter from 1994, originally proposed for a different objective that we call best-policy identification. We prove that RF-UCRL needs of order $({SAH^4}/{\varepsilon^2})(\log(1/\delta) + S)$ episodes to output, with probability $1-\delta$, an $\varepsilon$-approximation of the optimal policy for any reward function. This bound improves over existing sample-complexity bounds in both the small $\varepsilon$ and the small $\delta$ regimes. We further investigate the relative complexities of reward-free exploration and best-policy identification.
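As a quick sanity check on how the stated bound scales, the snippet below evaluates the order $({SAH^4}/{\varepsilon^2})(\log(1/\delta) + S)$ for a small tabular instance; the leading constant `c` is an assumption, since the abstract only gives the order of the episode count.

```python
import math

def rf_ucrl_episode_bound(S, A, H, eps, delta, c=1.0):
    """Order-of-magnitude episode count from the RF-UCRL bound
    (S A H^4 / eps^2) * (log(1/delta) + S)."""
    return c * (S * A * H**4 / eps**2) * (math.log(1.0 / delta) + S)

# e.g. a small tabular MDP with 20 states, 4 actions, horizon 10
print(rf_ucrl_episode_bound(S=20, A=4, H=10, eps=0.1, delta=0.05))
```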


Improved Exploration in Factored Average-Reward MDPs

arXiv.org Machine Learning

In reinforcement learning (RL), an agent repeatedly interacts with an unknown environment in order to maximize its cumulative reward. A typical model of the environment is a Markov decision process (MDP): in each decision epoch, the agent observes a state, takes an action and receives a reward before transiting to the next state. To achieve its objective, the agent has to estimate the parameters of the MDP from experience and learn a policy that maps states to actions. While doing so, the agent faces a choice between two basic strategies: exploration, i.e. discovering the effects of actions on the environment, and exploitation, i.e. using its current knowledge to maximize reward in the short term. Most model-based RL algorithms treat the state as a black box.
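The "black box" baseline the abstract refers to is the standard tabular model-based loop: estimate transitions and rewards from counts, plan on the empirical model, act, repeat. A minimal sketch follows, assuming a gym-style `env.step` interface and at least one episode; it acts purely greedily, which is exactly where an exploration bonus (as in the work above) would normally be added.

```python
import numpy as np

def model_based_rl(env, S, A, episodes, horizon):
    """Tabular model-based RL loop: estimate the MDP from counts, run
    finite-horizon value iteration on the empirical model, act greedily.
    The state is treated as a black box (no factored structure used)."""
    counts = np.zeros((S, A, S))
    reward_sum = np.zeros((S, A))
    for _ in range(episodes):
        n_sa = np.maximum(counts.sum(axis=2), 1)
        P_hat = counts / n_sa[:, :, None]        # empirical transition model
        R_hat = reward_sum / n_sa                # empirical mean rewards
        V = np.zeros(S)
        Q = np.zeros((S, A))
        for _ in range(horizon):                 # value iteration on the model
            Q = R_hat + P_hat @ V
            V = Q.max(axis=1)
        policy = Q.argmax(axis=1)
        s = env.reset()
        for _ in range(horizon):
            a = int(policy[s])
            s_next, r, done, _ = env.step(a)
            counts[s, a, s_next] += 1
            reward_sum[s, a] += r
            s = s_next
            if done:
                break
    return P_hat, R_hat
```

In a factored MDP the transition model would instead be estimated per factor, which is what allows the improved exploration the title refers to.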


Induction and Exploitation of Subgoal Automata for Reinforcement Learning

arXiv.org Artificial Intelligence

In this paper we present ISA, an approach for learning and exploiting subgoals in episodic reinforcement learning (RL) tasks. ISA interleaves reinforcement learning with the induction of a subgoal automaton, an automaton whose edges are labeled by the task's subgoals expressed as propositional logic formulas over a set of high-level events. A subgoal automaton also contains two special states: a state indicating the successful completion of the task, and a state indicating that the task has finished without succeeding. A state-of-the-art inductive logic programming system is used to learn a subgoal automaton that covers the traces of high-level events observed by the RL agent. When the currently exploited automaton does not correctly recognize a trace, the automaton learner induces a new automaton that covers that trace. The interleaving process guarantees the induction of automata with the minimum number of states, and applies a symmetry breaking mechanism to shrink the search space whilst remaining complete. We evaluate ISA in several grid-world and continuous state space problems using different RL algorithms that leverage the automaton structures. We provide an in-depth empirical analysis of the performance of the automaton learning process in terms of the traces, the symmetry breaking, and the specific restrictions imposed on the final learnable automaton. For each class of RL problem, we show that the learned automata can be successfully exploited to learn policies that reach the goal, achieving an average reward comparable to the case where automata are not learned but handcrafted and given beforehand.
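The structure being learned is easiest to see as a small data type: states, an accepting and a rejecting state, and edges guarded by propositional conditions over high-level events. The toy "coffee task" below is a hypothetical example for illustration, not one taken from the paper.

```python
class SubgoalAutomaton:
    """Toy subgoal automaton. Edges are labeled by propositional conditions
    over high-level events; `accept` marks task completion and `reject`
    marks that the task finished without succeeding."""
    def __init__(self, initial, accept, reject, edges):
        self.state = initial
        self.accept, self.reject = accept, reject
        self.edges = edges                 # {state: [(condition, next_state), ...]}

    def step(self, events):
        """Advance on the set of high-level events observed this step."""
        for condition, next_state in self.edges.get(self.state, []):
            if condition(events):
                self.state = next_state
                break
        return self.state

    def is_terminal(self):
        return self.state in (self.accept, self.reject)

# hypothetical task: get coffee, then reach the office, without touching water
automaton = SubgoalAutomaton(
    initial="u0", accept="u_acc", reject="u_rej",
    edges={
        "u0": [(lambda e: "water" in e, "u_rej"),
               (lambda e: "coffee" in e, "u1")],
        "u1": [(lambda e: "water" in e, "u_rej"),
               (lambda e: "office" in e, "u_acc")],
    },
)
```

The RL algorithms mentioned in the abstract exploit this structure, e.g. by learning one policy per automaton state or per outgoing edge.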


Planning in Markov Decision Processes with Gap-Dependent Sample Complexity

arXiv.org Machine Learning

We propose MDP-GapE, a new trajectory-based Monte-Carlo Tree Search algorithm for planning in a Markov Decision Process in which transitions have a finite support. We prove an upper bound on the number of calls to the generative model needed for MDP-GapE to identify a near-optimal action with high probability. This problem-dependent sample complexity result is expressed in terms of the sub-optimality gaps of the state-action pairs that are visited during exploration. Our experiments reveal that MDP-GapE is also effective in practice, in contrast with other algorithms with sample complexity guarantees in the fixed-confidence setting, which are mostly theoretical.
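To illustrate the fixed-confidence, gap-based flavor of the problem, here is a simplified sketch of identifying a near-optimal root action from sampled returns. This is a generic gap-based best-arm-identification loop, not MDP-GapE itself; `sample_return(a)` is an assumed callable that rolls out one trajectory from the generative model after taking action `a`, and the Hoeffding-style confidence radius is also an assumption.

```python
import numpy as np

def identify_near_optimal_action(sample_return, n_actions, eps, delta,
                                 max_samples=100000):
    """Stop when the empirically best action is eps-optimal with high
    probability, i.e. its lower bound dominates every rival's upper bound
    up to eps (simplified gap-based sketch)."""
    means = np.zeros(n_actions)
    counts = np.zeros(n_actions)
    for t in range(1, max_samples + 1):
        radius = np.sqrt(np.log(2 * n_actions * t**2 / delta)
                         / (2 * np.maximum(counts, 1)))
        upper, lower = means + radius, means - radius
        best = int(np.argmax(means))
        challenger = int(np.argmax(np.where(np.arange(n_actions) == best,
                                            -np.inf, upper)))
        if lower[best] >= upper[challenger] - eps:
            return best                      # near-optimal with high probability
        for a in (best, challenger):         # sample the two critical actions
            r = sample_return(a)
            counts[a] += 1
            means[a] += (r - means[a]) / counts[a]
    return int(np.argmax(means))
```

The sample complexity of such procedures naturally depends on the gaps: actions whose gaps are large are eliminated after few samples.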


Lifelong Control of Off-grid Microgrid with Model Based Reinforcement Learning

arXiv.org Artificial Intelligence

Centralized microgrid control is usually decomposed into four tasks: i) estimating the parameters of the microgrid devices (for instance the charge efficiency of a battery storage device as a function of the state of charge and temperature, or the actual capacity of a battery after a number of cycles); ii) forecasting the consumption and the renewable production; iii) operational planning to anticipate weather effects and human activities; and iv) real-time control to adapt planned decisions to the current situation. These tasks are performed sequentially during the lifetime of a microgrid in order to achieve near-optimal operation and to maximize the benefits arising from distributed generation. The lifelong control problem of an off-grid microgrid is composed of two tasks, namely estimation of the condition of the microgrid devices, and operational planning accounting for the uncertainties by forecasting the future consumption and the renewable production. The main challenge for effective control arises from the various changes that take place over time. In this paper, we present an open-source reinforcement learning framework for the modeling of an off-grid microgrid for rural electrification. The lifelong control problem of an isolated microgrid is formulated as a ...


Induction of Subgoal Automata for Reinforcement Learning

arXiv.org Machine Learning

Our method relies on inducing an automaton whose transitions are subgoals expressed as propositional formulas over a set of observable events. A state-of-the-art inductive logic programming system is used to learn the automaton from observation traces perceived by the RL agent. The reinforcement learning and automaton learning processes are interleaved: a new refined automaton is learned whenever the RL agent generates a trace not recognized by the current automaton. We evaluate ISA in several gridworld problems and show that it performs similarly to a method for which automata are given in advance. We also show that the learned automata can be exploited to speed up convergence through reward shaping and transfer learning across multiple tasks. Finally, we analyze the running time and the number of traces that ISA needs to learn an automaton, and the impact that the number of observable events has on the learner's performance.
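The interleaving itself is a short counterexample-driven loop, sketched below. `run_episode`, `learn_automaton`, and the automaton's `recognizes` method are assumed interfaces standing in for the RL algorithm, the inductive logic programming learner, and the trace check, respectively; they are not the paper's actual API.

```python
def isa_interleaved_learning(run_episode, learn_automaton, automaton,
                             num_episodes):
    """Interleave RL with automaton induction: whenever an observed trace of
    high-level events is not recognized by the current automaton, re-run the
    learner on all counterexamples collected so far."""
    counterexamples = []
    for _ in range(num_episodes):
        trace, goal_reached = run_episode(automaton)   # RL guided by the automaton
        if not automaton.recognizes(trace, goal_reached):
            counterexamples.append((trace, goal_reached))
            # induce a new (minimal) automaton covering every stored trace
            automaton = learn_automaton(counterexamples)
    return automaton
```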


Generalized Planning With Procedural Domain Control Knowledge

arXiv.org Artificial Intelligence

Generalized planning is the task of generating a single solution that is valid for a set of planning problems. In this paper we show how to represent and compute generalized plans using procedural Domain Control Knowledge (DCK). We define a divide and conquer approach that first generates the procedural DCK solving a set of planning problems representative of certain subtasks, and then compiles it into callable procedures of the overall generalized planning problem. Our procedure calling mechanism allows nested and recursive procedure calls and is implemented in PDDL so that classical planners can compute and exploit procedural DCK. Experiments show that an off-the-shelf classical planner, using procedural DCK as callable procedures, can compute generalized plans in a wide range of domains including non-trivial ones, such as sorting variable-size lists or DFS traversal of binary trees with variable size.
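The idea of a single program-like plan that calls procedures and works for every instance size can be illustrated outside PDDL. The Python analogue below solves the sorting domain mentioned in the abstract for lists of any length by repeatedly calling a procedure (`select_min`) and a primitive action (`swap`); it is only an analogue of procedural DCK, not the PDDL compilation used in the paper.

```python
def swap(state, i, j):
    """Primitive planning action: swap two positions of the list."""
    state[i], state[j] = state[j], state[i]

def select_min(state, start):
    """Procedure (DCK-style subroutine): index of the minimum from `start` on."""
    best = start
    for k in range(start + 1, len(state)):
        if state[k] < state[best]:
            best = k
    return best

def generalized_sort_plan(state):
    """One program that solves every instance of the sorting domain,
    whatever the list size, by calling the procedure and the action."""
    for i in range(len(state)):
        j = select_min(state, i)
        if j != i:
            swap(state, i, j)
    return state

print(generalized_sort_plan([3, 1, 2]))   # -> [1, 2, 3]
```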