Programmable Reinforcement Learning Agents
David Andre and Stuart J. Russell
Neural Information Processing Systems
We present an expressive agent design language for reinforcement learning that allows the user to constrain the policies considered by the learning process. The language includes standard features such as parameterized subroutines, temporary interrupts, aborts, and memory variables, but also allows for unspecified choices in the agent program. For learning that which isn't specified, we present provably convergent learning algorithms. We demonstrate by example that agent programs written in the language are concise as well as modular. This facilitates state abstraction and the transferability of learned skills.

1 Introduction

The field of reinforcement learning has recently adopted the idea that the application of prior knowledge may allow much faster learning and may indeed be essential if real-world environments are to be addressed. For learning behaviors, the most obvious form of prior knowledge provides a partial description of desired behaviors. Several languages for partial descriptions have been proposed, including Hierarchical Abstract Machines (HAMs) [8], semi-Markov options [12], and the MAXQ framework [4]. This paper describes extensions to the HAM language that substantially increase its expressive power, using constructs borrowed from programming languages. Obviously, increasing expressiveness makes it easier for the user to supply whatever prior knowledge is available, and to do so more concisely.
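To make the idea of a partial agent program concrete, here is a minimal sketch in Python. All names (`choose`, `go_to`, `agent_program`) and the toy grid task are illustrative assumptions, not the paper's actual language or API: the point is only that the program fixes the overall structure (parameterized subroutines, a control loop) while leaving one decision, the order in which to visit sites, as an unspecified choice that a learning algorithm would fill in.

```python
import random

def choose(state, options):
    # Unspecified choice point (hypothetical construct): in the paper's
    # framework a convergent RL algorithm learns which option to pick here;
    # a uniform random stand-in keeps this sketch runnable.
    return random.choice(options)

def go_to(pos, target):
    # Parameterized subroutine: one greedy step toward `target` on a grid.
    x, y = pos
    tx, ty = target
    if x != tx:
        x += 1 if tx > x else -1
    elif y != ty:
        y += 1 if ty > y else -1
    return (x, y)

def agent_program(start, sites):
    # Partial agent program: visit every site, but leave the visiting
    # order unspecified. Returns the number of primitive steps taken.
    pos = start
    remaining = list(sites)   # memory variable tracking unvisited sites
    steps = 0
    while remaining and steps < 100:
        target = choose(pos, remaining)       # choice left to the learner
        while pos != target and steps < 100:
            pos = go_to(pos, target)
            steps += 1
        if pos == target:
            remaining.remove(target)
    return steps
```

Constraining the learner to programs of this shape shrinks the policy space dramatically: only the choice points must be learned, while the subroutine structure is supplied as prior knowledge.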
Dec-31-2001