prioritized sweeping
Memory-Based Reinforcement Learning: Efficient Computation with Prioritized Sweeping
Moore, Andrew W., Atkeson, Christopher G.
We present a new algorithm, Prioritized Sweeping, for efficient prediction and control of stochastic Markov systems. Incremental learning methods such as Temporal Differencing and Q-learning have fast real-time performance. Classical methods are slower, but more accurate, because they make full use of the observations. Prioritized Sweeping aims for the best of both worlds. It uses all previous experiences both to prioritize important dynamic programming sweeps and to guide the exploration of state-space. We compare Prioritized Sweeping with other reinforcement learning schemes for a number of different stochastic optimal control problems. It successfully solves large state-space real-time problems with which other methods have difficulty.
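The core idea of the abstract above — using a priority queue to order value-function backups by the size of their expected change — can be sketched as follows. This is a minimal illustrative sketch for a tabular prediction problem, not the paper's exact pseudocode; the `model` layout (state → list of `(prob, reward, next_state)` triples) and the parameter names are assumptions made for the example.

```python
import heapq
from collections import defaultdict

def prioritized_sweeping(model, V, gamma=0.95, theta=1e-4, n_sweeps=10):
    """Run up to n_sweeps prioritized value backups (illustrative sketch).

    model: dict mapping state -> list of (prob, reward, next_state) triples
    V: dict of state-value estimates, updated in place
    theta: priority threshold below which states are not queued
    """
    # predecessors[s] = states with at least one transition into s
    predecessors = defaultdict(set)
    for s, transitions in model.items():
        for _, _, s2 in transitions:
            predecessors[s2].add(s)

    def backup(s):
        # full expected (dynamic programming) backup over all successors
        return sum(p * (r + gamma * V[s2]) for p, r, s2 in model[s])

    # seed the queue with each state's current Bellman error
    pq = []
    for s in model:
        err = abs(backup(s) - V[s])
        if err > theta:
            heapq.heappush(pq, (-err, s))  # negate: heapq is a min-heap

    for _ in range(n_sweeps):
        if not pq:
            break
        _, s = heapq.heappop(pq)
        V[s] = backup(s)
        # updating V[s] may change the Bellman error of s's predecessors
        for sp in predecessors[s]:
            err = abs(backup(sp) - V[sp])
            if err > theta:
                heapq.heappush(pq, (-err, sp))
    return V
```

The queue concentrates computation where value estimates are changing most, which is what lets the method make "full use of the observations" without sweeping the whole state-space uniformly.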
Planning by Prioritized Sweeping with Small Backups
van Seijen, Harm, Sutton, Richard S.
Efficient planning plays a crucial role in model-based reinforcement learning. Traditionally, the main planning operation is a full backup based on the current estimates of the successor states. Consequently, its computation time is proportional to the number of successor states. In this paper, we introduce a new planning backup that uses only the current value of a single successor state and has a computation time independent of the number of successor states. This new backup, which we call a small backup, opens the door to a new class of model-based reinforcement learning methods that exhibit much finer control over their planning process than traditional methods. We empirically demonstrate that this increased flexibility allows for more efficient planning by showing that an implementation of prioritized sweeping based on small backups achieves a substantial performance improvement over classical implementations.
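The small backup described above can be sketched in a few lines. This is an assumed minimal rendering, not the paper's pseudocode: the table layout (`model[s][s2]` as the transition probability, a cache `u[s][s2]` of the successor value last folded into `V[s]`, and `V[s]` seeded with the expected immediate reward) is a choice made for the example.

```python
def small_backup(V, u, model, gamma, s, s2):
    """Fold successor s2's current value into V[s] in O(1) time.

    V: state-value table, with V[s] initialized to the expected
       immediate reward from s.
    u: cache, u[s][s2] = value of s2 used in the last update of V[s]
       (initially 0).
    model[s][s2]: transition probability from s to s2 (assumed layout).
    """
    p = model[s][s2]
    # swap the cached contribution of s2 for its current value
    V[s] += p * gamma * (V[s2] - u[s][s2])
    u[s][s2] = V[s2]
```

With this convention, performing one small backup per successor leaves `V[s]` equal to the result of a full expected backup, so a schedule of small backups can reproduce full backups exactly while permitting the finer-grained control over planning that the abstract describes.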