Reinforcement Learning: An Introduction

#artificialintelligence

In 9 hours, Google's AlphaZero went from knowing only the rules of chess to beating the strongest chess programs in the world. Chess has been studied by humans for over 1000 years, yet a reinforcement learning model was able to further our knowledge of the game in a negligible amount of time, using no prior knowledge aside from the game rules. No other machine learning paradigm allows for such progress on this kind of problem. Today, similar models by Google are being used in a wide variety of fields, such as predicting and detecting early signs of life-changing illnesses and improving text-to-speech systems. Machine learning can be divided into three main paradigms.


Dissecting Reinforcement Learning-Part.3

#artificialintelligence

Welcome to the third part of the series "Dissecting Reinforcement Learning". In the first and second posts we dissected dynamic programming and Monte Carlo (MC) methods. The third group of techniques in reinforcement learning is called Temporal Difference (TD) methods. TD learning solves some of the problems arising in MC learning. In the conclusion of the second part I described one of these problems: with MC methods it is necessary to wait until the end of the episode before updating the utility function. This is a serious limitation, because some applications can have very long episodes, and delaying learning until the end is too slow; moreover, termination of the episode is not always guaranteed. We will see how TD methods solve these issues.
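
To make the contrast with MC concrete, here is a minimal TD(0) sketch in Python; the RandomWalk chain, the step interface, and all names are illustrative choices, not code from the post. The key point is that the value estimate is updated after every single transition, so learning does not have to wait for the episode to terminate.

```python
import random
from collections import defaultdict

class RandomWalk:
    """Toy episodic chain: states 0..6, start at 3, terminate at either end."""
    def reset(self):
        self.s = 3
        return self.s
    def step(self):
        self.s += random.choice([-1, 1])        # move left or right at random
        done = self.s in (0, 6)
        reward = 1.0 if self.s == 6 else 0.0    # reward only at the right end
        return self.s, reward, done

def td0(env, episodes=1000, alpha=0.1, gamma=1.0):
    V = defaultdict(float)                      # state -> estimated utility
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            s_next, r, done = env.step()
            # TD(0) update: bootstrap from V[s_next] instead of waiting
            # for the complete Monte Carlo return at the end of the episode.
            target = r if done else r + gamma * V[s_next]
            V[s] += alpha * (target - V[s])
            s = s_next
    return V

V = td0(RandomWalk())
print({s: round(v, 2) for s, v in sorted(V.items())})  # approaches s/6 for s = 1..5
```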



A Framework for Reinforcement Learning and Planning

arXiv.org Artificial Intelligence

Sequential decision making, commonly formalized as Markov Decision Process (MDP) optimization, is a key challenge in artificial intelligence. Two successful approaches to MDP optimization are planning and reinforcement learning, each of which has largely developed its own research community. However, if both fields solve the same problem, we should be able to disentangle the common factors in their solution approaches. This paper therefore presents a unifying framework for reinforcement learning and planning (FRAP), which identifies the underlying dimensions on which any planning or learning algorithm has to decide. At the end of the paper, we compare, in a single table, a variety of well-known planning, model-free, and model-based RL algorithms along the dimensions of our framework, illustrating its validity. Altogether, FRAP provides deeper insight into the algorithmic space of planning and reinforcement learning, and also suggests new approaches to the integration of both fields.
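
As a hedged illustration of the abstract's premise that planning and reinforcement learning solve the same underlying problem, the sketch below applies a planning method (value iteration, which sweeps a known transition model) and a model-free RL method (Q-learning, which only sees sampled transitions) to the same tiny MDP. The MDP, its transition table P, and all hyperparameters are invented for the example and are not taken from the paper.

```python
import random

# Deterministic toy MDP: P[state][action] = (next_state, reward); state 2 is terminal.
P = {0: {0: (1, 0.0), 1: (0, 0.0)},
     1: {0: (2, 1.0), 1: (0, 0.0)},
     2: {0: (2, 0.0), 1: (2, 0.0)}}
GAMMA = 0.9

def value_iteration(sweeps=50):
    """Planning: back up values through the known transition model."""
    V = {s: 0.0 for s in P}
    for _ in range(sweeps):
        V = {s: max(r + GAMMA * V[s2] for s2, r in P[s].values()) for s in P}
    return V

def q_learning(episodes=2000, alpha=0.1, eps=0.1):
    """Model-free RL: learn action values from sampled transitions only."""
    Q = {s: {a: 0.0 for a in P[s]} for s in P}
    for _ in range(episodes):
        s = 0
        while s != 2:
            if random.random() < eps:
                a = random.choice(list(P[s]))   # explore
            else:
                a = max(Q[s], key=Q[s].get)     # exploit
            s2, r = P[s][a]                     # the agent only observes this sample
            Q[s][a] += alpha * (r + GAMMA * max(Q[s2].values()) - Q[s][a])
            s = s2
    return {s: max(Q[s].values()) for s in P}

print(value_iteration())   # planning with the model: roughly {0: 0.9, 1: 1.0, 2: 0.0}
print(q_learning())        # approximately the same values, recovered from experience
```

Both routines target the same optimal value function; they differ along dimensions such as whether a model is available and how backups are performed, which is the kind of dimension the framework aims to make explicit.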


Dyna-T: Dyna-Q and Upper Confidence Bounds Applied to Trees

arXiv.org Artificial Intelligence

In this work we present a preliminary investigation of a novel algorithm called Dyna-T. In reinforcement learning (RL), a planning agent has its own representation of the environment as a model. To discover an optimal policy for interacting with the environment, the agent collects experience in a trial-and-error fashion. Experience can be used either to learn a better model or to directly improve the value function and policy. While these two uses are typically kept separate, Dyna-Q is a hybrid approach which, at each iteration, exploits real experience to update the model as well as the value function, while planning its actions using simulated data from the model. However, the planning process is computationally expensive and depends strongly on the dimensionality of the state-action space. We propose to build an Upper Confidence Tree (UCT) on the simulated experience and search for the best action to select during the online learning process. We demonstrate the effectiveness of the proposed method in a set of preliminary tests on three testbed environments from OpenAI. In contrast to Dyna-Q, Dyna-T outperforms state-of-the-art RL agents in the stochastic environments by choosing a more robust action selection strategy.
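
For context, below is a minimal Dyna-Q sketch in Python, the baseline the abstract builds on, not the authors' Dyna-T implementation; the Corridor environment, its interface, and all names are illustrative assumptions. Each real transition drives a direct Q-learning update and a model update, after which a few planning updates replay simulated transitions from the learned model. Dyna-T replaces the action selection used during this planning phase with a UCT search, which is omitted here.

```python
import random
from collections import defaultdict

class Corridor:
    """Toy 1-D corridor: start at state 0, reach state 7 for a reward of 1."""
    n_actions = 2                               # 0 = left, 1 = right
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):
        self.s = max(0, self.s - 1) if a == 0 else self.s + 1
        done = self.s == 7
        return self.s, (1.0 if done else 0.0), done

def dyna_q(env, episodes=200, planning_steps=10, alpha=0.1, gamma=0.95, eps=0.1):
    Q = defaultdict(float)                      # (state, action) -> value
    model = {}                                  # (state, action) -> (reward, next_state, done)
    acts = range(env.n_actions)

    def update(s, a, r, s2, done):
        target = r + (0.0 if done else gamma * max(Q[(s2, b)] for b in acts))
        Q[(s, a)] += alpha * (target - Q[(s, a)])

    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            if random.random() < eps:
                a = random.choice(list(acts))
            else:
                a = max(acts, key=lambda b: Q[(s, b)])
            s2, r, done = env.step(a)
            update(s, a, r, s2, done)           # (1) direct RL from real experience
            model[(s, a)] = (r, s2, done)       # (2) model learning
            for _ in range(planning_steps):     # (3) planning on simulated experience
                (ps, pa), (pr, ps2, pd) = random.choice(list(model.items()))
                update(ps, pa, pr, ps2, pd)
            s = s2
    return Q

env = Corridor()
Q = dyna_q(env)
print([max(range(env.n_actions), key=lambda a: Q[(s, a)]) for s in range(7)])  # greedy actions
```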