PolyTask: Learning Unified Policies through Behavior Distillation

Siddhant Haldar, Lerrel Pinto

arXiv.org Artificial Intelligence 

Abstract-- Unified models capable of solving a wide variety of tasks have gained traction in vision and NLP due to their ability to share regularities and structures across tasks, which improves individual task performance and reduces computational footprint. However, the impact of such models remains limited in embodied learning problems, which present unique challenges due to interactivity, sample inefficiency, and sequential task presentation. In this work, we present PolyTask, a novel method for learning a single unified model that can solve various embodied tasks through a 'learn then distill' mechanism. In the 'learn' step, PolyTask leverages a few demonstrations for each task to train task-specific policies. Then, in the 'distill' step, the task-specific policies are distilled into a single unified policy using a new distillation method called Behavior Distillation. Given a unified policy, individual task behavior can be extracted through conditioning variables. PolyTask is designed to be conceptually simple: it leverages well-established RL algorithms to enable interactivity, uses a handful of expert demonstrations to allow for sample efficiency, and forgoes interactive access to tasks during distillation to enable lifelong learning. Once trained, the unified policy can solve tasks by conditioning on task identifiers such as a goal image, text description, or one-hot label.
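The 'distill' step described above can be sketched as supervised regression of a single task-conditioned policy onto the actions of per-task experts. The sketch below is a minimal illustration, not the paper's implementation: the "experts" are hypothetical hand-written functions standing in for the task-specific policies from the 'learn' step, the unified policy is linear in task-gated features, and least squares stands in for the distillation loss. Extracting a task's behavior then amounts to conditioning on a one-hot task label, as the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task-specific expert policies (stand-ins for the task
# policies produced by the 'learn' step): each maps a scalar state to
# a scalar action.
experts = {
    0: lambda s: 2.0 * s,         # expert for task 0
    1: lambda s: -1.0 * s + 3.0,  # expert for task 1
}

n_tasks = len(experts)
states = rng.uniform(-1.0, 1.0, size=200)

# 'Distill' step, sketched: build a dataset of (features, expert action)
# pairs. Features are task-gated, [state * one-hot, one-hot], so each
# task can realize its own slope and offset in a single linear policy.
X, y = [], []
for task_id, expert in experts.items():
    onehot = np.eye(n_tasks)[task_id]
    for s in states:
        X.append(np.concatenate([s * onehot, onehot]))
        y.append(expert(s))
X, y = np.asarray(X), np.asarray(y)

# Least-squares fit stands in for the supervised distillation loss.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def unified_policy(state, task_id):
    """Extract one task's behavior by conditioning on a one-hot label."""
    onehot = np.eye(n_tasks)[task_id]
    return float(np.concatenate([state * onehot, onehot]) @ w)
```

Because the experts here are themselves linear, the distilled policy reproduces them exactly; with real task policies one would use an expressive network and no interactive access to the tasks, as the abstract notes.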
