Transfer of Deep Reactive Policies for MDP Planning
Aniket Bajpai, Sankalp Garg, Mausam
Neural Information Processing Systems
Domain-independent probabilistic planners input an MDP description in a factored representation language such as PPDDL or RDDL, and exploit the specifics of the representation for faster planning. Traditional algorithms operate on each problem instance independently, and good methods for transferring experience from policies of other instances of a domain to a new instance do not exist. Recently, researchers have begun exploring the use of deep reactive policies, trained via deep reinforcement learning (RL), for MDP planning domains. One advantage of deep reactive policies is that they are more amenable to transfer learning. In this paper, we present the first domain-independent transfer algorithm for MDP planning domains expressed in an RDDL representation.
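To make the idea of a "deep reactive policy" concrete: such a policy maps the current factored state of an MDP instance (e.g., a vector of ground boolean fluents, as in an RDDL ground problem) directly to an action distribution, with no lookahead search. The sketch below is purely illustrative and is not the paper's architecture; the fluent and action counts, and the single linear layer, are stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical factored MDP instance: a state is a vector of boolean
# fluents (as in a ground RDDL problem) and actions are discrete.
# Sizes are illustrative assumptions, not taken from the paper.
N_FLUENTS = 6
N_ACTIONS = 4

# A reactive policy maps the state directly to action probabilities.
# A single linear layer + softmax stands in for a deep network here.
W = rng.normal(scale=0.1, size=(N_FLUENTS, N_ACTIONS))
b = np.zeros(N_ACTIONS)

def reactive_policy(state: np.ndarray) -> np.ndarray:
    """Return a probability distribution over actions for one state."""
    logits = state @ W + b
    logits -= logits.max()          # for numerical stability
    p = np.exp(logits)
    return p / p.sum()

state = rng.integers(0, 2, size=N_FLUENTS).astype(float)
probs = reactive_policy(state)
```

Because the policy is just a state-to-action mapping, its learned weights can in principle be reused (transferred) on a new instance of the same domain, which is the property the paper exploits.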