
 Garg, Animesh


Continuous Relaxation of Symbolic Planner for One-Shot Imitation Learning

arXiv.org Artificial Intelligence

De-An Huang, Danfei Xu, Yuke Zhu, Animesh Garg, Silvio Savarese, Li Fei-Fei, Juan Carlos Niebles

Abstract -- We address one-shot imitation learning, where the goal is to execute a previously unseen task based on a single demonstration. While there has been exciting progress in this direction, most of the approaches still require a few hundred tasks for meta-training, which limits their scalability. Our main contribution is to formulate one-shot imitation learning as a symbolic planning problem along with the symbol grounding problem. This formulation disentangles the policy execution from the inter-task generalization and leads to better data efficiency. The key technical challenge is that the symbol grounding is prone to error with limited training data, leading to subsequent symbolic planning failures. We address this challenge by proposing a continuous relaxation of the discrete symbolic planner that directly plans on the probabilistic outputs of the symbol grounding model. Our continuous relaxation of the planner can still leverage the information contained in the probabilistic symbol grounding and significantly improves over the baseline planner on the one-shot imitation learning tasks without using large training data.

INTRODUCTION

We are interested in robots that can learn a wide variety of tasks efficiently. Recently, there has been an increasing interest in the one-shot imitation learning problem [1-7], where the goal is to learn to execute a previously unseen task from only a single demonstration of the task. This setting is also referred to as meta-learning [3, 8], where the meta-training stage uses a set of tasks in a given domain to simulate the one-shot testing scenario. This allows the learned model to generalize to previously unseen tasks with a single demonstration in the meta-testing stage. The main shortcoming of these one-shot approaches is that they typically require a large amount of data for meta-training (400 meta-training tasks in [4] and 1000 in [6] for the Block Stacking task [6]) to generalize reliably to unseen tasks.
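To make the core idea concrete, here is a minimal Python sketch (not the authors' implementation) of planning directly on probabilistic symbol grounding outputs instead of thresholding them into hard predicates: each action's preconditions are scored by the product of grounded predicate probabilities, and effects overwrite the relaxed state. The predicates, actions, and goal below are hypothetical placeholders.

```python
def precondition_score(action, belief):
    """Probability that every precondition of `action` holds, assuming
    the grounded predicates are independent (part of the relaxation)."""
    score = 1.0
    for predicate in action["preconditions"]:
        score *= belief.get(predicate, 0.0)
    return score

def apply_effects(action, belief):
    """Relaxed transition: effects overwrite predicate probabilities."""
    new_belief = dict(belief)
    new_belief.update(action["effects"])
    return new_belief

def goal_score(goal, belief):
    score = 1.0
    for predicate in goal:
        score *= belief.get(predicate, 0.0)
    return score

def relaxed_plan(belief, actions, goal, horizon=10):
    """Greedy continuous-relaxation planner: at each step pick the action
    whose preconditions are most probable, until the goal is likely."""
    plan = []
    for _ in range(horizon):
        if goal_score(goal, belief) > 0.9:
            break
        best = max(actions, key=lambda a: precondition_score(a, belief))
        plan.append(best["name"])
        belief = apply_effects(best, belief)
    return plan

# Hypothetical usage: the probabilities stand in for a learned symbol
# grounding model's outputs on the current observation.
belief = {"clear(A)": 0.95, "on_table(A)": 0.9, "clear(B)": 0.85}
actions = [{
    "name": "stack(A,B)",
    "preconditions": ["clear(A)", "clear(B)", "on_table(A)"],
    "effects": {"on(A,B)": 1.0, "clear(B)": 0.0},
}]
print(relaxed_plan(belief, actions, goal=["on(A,B)"]))   # -> ['stack(A,B)']
```

A discrete planner would instead binarize the grounding at some threshold and can fail outright when a single grounding error flips a precondition; the relaxation keeps that uncertainty in the search.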


Variable Impedance Control in End-Effector Space: An Action Space for Reinforcement Learning in Contact-Rich Tasks

arXiv.org Artificial Intelligence

Reinforcement Learning (RL) of contact-rich manipulation tasks has yielded impressive results in recent years. While many studies in RL focus on varying the observation space or reward model, few efforts have focused on the choice of action space (e.g. joint or end-effector space, position, velocity, etc.). However, studies in robot motion control indicate that choosing an action space that conforms to the characteristics of the task can simplify exploration and improve robustness to disturbances. This paper studies the effect of different action spaces in deep RL and advocates for Variable Impedance Control in End-effector Space (VICES) as an advantageous action space for constrained and contact-rich tasks. We evaluate multiple action spaces on three prototypical manipulation tasks: Path Following (a task with no contact), Door Opening (a task with kinematic constraints), and Surface Wiping (a task with continuous contact). We show that VICES improves sample efficiency, maintains low energy consumption, and ensures safety across all three experimental setups. Further, RL policies learned with VICES can transfer across different robot models in simulation, and from simulation to real for the same robot. Further information is available at https://stanfordvl.github.io/vices.
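As an illustration of what such an action space looks like, the sketch below is a simplified, position-only impedance law (not the paper's controller): the RL action specifies both an end-effector displacement and per-axis stiffness gains, which are mapped to joint torques through the manipulator Jacobian. All state quantities and dimensions are placeholders that a simulator or robot driver would provide.

```python
import numpy as np

def vices_torque(action, ee_pos, ee_vel, jacobian):
    """Map a VICES-style action to joint torques (position only, no
    orientation or gravity compensation in this toy version).

    The action sets both the end-effector target and the diagonal
    impedance gains, so the policy can stiffen or soften the controller
    at every time step.
    """
    delta_pos = action[0:3]        # commanded end-effector displacement
    kp = np.exp(action[3:6])       # positive per-axis stiffness gains
    kd = 2.0 * np.sqrt(kp)         # critically damped by construction

    pos_target = ee_pos + delta_pos
    wrench = kp * (pos_target - ee_pos) - kd * ee_vel   # desired task-space force
    return jacobian.T @ wrench                           # map to joint space

# Hypothetical usage with a 7-DoF arm and a random placeholder Jacobian.
rng = np.random.default_rng(0)
action = rng.uniform(-1.0, 1.0, size=6)   # [dx, dy, dz, log_kp_x, log_kp_y, log_kp_z]
ee_pos = np.array([0.5, 0.0, 0.3])
ee_vel = np.zeros(3)
J = rng.standard_normal((3, 7))           # position Jacobian stand-in
print(vices_torque(action, ee_pos, ee_vel, J))
```

The design point the paper makes is visible here: because the gains are part of the action, the policy can be compliant during contact and stiff during free-space motion without any change to the learning algorithm.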


RoboTurk: A Crowdsourcing Platform for Robotic Skill Learning through Imitation

arXiv.org Artificial Intelligence

Imitation Learning has empowered recent advances in learning robotic manipulation tasks by addressing shortcomings of Reinforcement Learning such as exploration and reward specification. However, research in this area has been limited to modest-sized datasets due to the difficulty of collecting large quantities of task demonstrations through existing mechanisms. This work introduces RoboTurk to address this challenge. RoboTurk is a crowdsourcing platform for high-quality, 6-DoF trajectory-based teleoperation through the use of widely available mobile devices (e.g. iPhone). We evaluate RoboTurk on three manipulation tasks of varying timescales (15-120s) and observe that our user interface is statistically similar to special-purpose hardware such as virtual reality controllers in terms of task completion times. Furthermore, we observe that poor network conditions, such as low-bandwidth and high-delay links, do not substantially affect the remote users' ability to perform task demonstrations successfully on RoboTurk. Lastly, we demonstrate the efficacy of RoboTurk through the collection of a pilot dataset; using RoboTurk, we collected 137.5 hours of manipulation data from remote workers, amounting to over 2200 successful task demonstrations in 22 hours of total system usage. We show that the data obtained through RoboTurk enables policy learning on multi-step manipulation tasks with sparse rewards and that using larger quantities of demonstrations during policy learning provides benefits in terms of both learning consistency and final performance. For additional results, videos, and to download our pilot dataset, visit roboturk.stanford.edu (http://roboturk.stanford.edu/).
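For intuition about the kind of data path such a platform needs, here is a hypothetical sketch of streaming 6-DoF phone poses to a robot-side consumer. The message fields, function names, and transport are illustrative only and do not reflect RoboTurk's actual wire format or architecture.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class TeleopMessage:
    """Hypothetical 6-DoF teleoperation packet (not RoboTurk's format)."""
    timestamp: float
    position: tuple       # (x, y, z) of the phone in a reference frame
    orientation: tuple    # quaternion (w, x, y, z)
    gripper_closed: bool

def phone_stream(n=3):
    """Stand-in for the mobile client: yields serialized pose packets."""
    for i in range(n):
        msg = TeleopMessage(time.time(), (0.1 * i, 0.0, 0.2),
                            (1.0, 0.0, 0.0, 0.0), i % 2 == 0)
        yield json.dumps(asdict(msg))

def robot_server(stream):
    """Stand-in for the server: decodes packets that a controller would track."""
    for packet in stream:
        msg = json.loads(packet)
        print("target pose:", msg["position"], "gripper:", msg["gripper_closed"])

robot_server(phone_stream())
```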


Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks

arXiv.org Artificial Intelligence

Abstract-- Contact-rich manipulation tasks in unstructured environments often require both haptic and visual feedback. However, it is nontrivial to manually design a robot controller that combines modalities with very different characteristics. While deep reinforcement learning has shown success in learning control policies for high-dimensional inputs, these algorithms are generally intractable to deploy on real robots due to sample complexity. We use self-supervision to learn a compact and multimodal representation of our sensory inputs, which can then be used to improve the sample efficiency of our policy learning. We evaluate our method on a peg insertion task, generalizing over different geometry, configurations, and clearances, while being robust to external perturbations. Results for simulated and real robot experiments are presented. Even in routine tasks such as putting a car key in the ignition, humans effortlessly combine their senses of vision and touch to complete the task. Visual feedback provides information about semantic and geometric object properties for accurate reaching or grasp pre-shaping. Haptic feedback provides information about the current contact conditions between object and environment for accurate localization and control even under occlusions.
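A minimal sketch of the architectural idea, assuming PyTorch and placeholder input sizes: each modality gets its own encoder and the outputs are fused into one compact latent vector that a downstream policy (and, in the paper, self-supervised prediction objectives) would consume. None of the layer sizes, names, or training losses below come from the paper.

```python
import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    """Hypothetical fusion encoder: one branch per modality, concatenated
    and projected to a compact representation. Dimensions are placeholders."""

    def __init__(self, latent_dim=32):
        super().__init__()
        self.vision = nn.Sequential(            # stand-in for a conv image encoder
            nn.Flatten(), nn.Linear(64 * 64 * 3, 128), nn.ReLU())
        self.haptics = nn.Sequential(           # force-torque history encoder
            nn.Flatten(), nn.Linear(32 * 6, 64), nn.ReLU())
        self.fuse = nn.Linear(128 + 64, latent_dim)

    def forward(self, image, force_torque):
        z = torch.cat([self.vision(image), self.haptics(force_torque)], dim=-1)
        return self.fuse(z)

# Hypothetical usage: one RGB frame plus a short force-torque window.
enc = MultimodalEncoder()
image = torch.zeros(1, 3, 64, 64)
force_torque = torch.zeros(1, 32, 6)
print(enc(image, force_torque).shape)   # torch.Size([1, 32])
```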


Neural Task Graphs: Generalizing to Unseen Tasks from a Single Video Demonstration

arXiv.org Artificial Intelligence

Our goal is for a robot to execute a previously unseen task based on a single video demonstration of the task. The success of our approach relies on the principle of transferring knowledge from seen tasks to unseen ones with similar semantics. More importantly, we hypothesize that to successfully execute a complex task from a single video demonstration, it is necessary to explicitly incorporate compositionality into the model. To test our hypothesis, we propose Neural Task Graph (NTG) Networks, which use a task graph as the intermediate representation to modularize the representations of both the video demonstration and the derived policy. We show this formulation achieves strong inter-task generalization on two complex tasks: Block Stacking in BulletPhysics and Object Collection in AI2-THOR. We further show that the same principle is applicable to real-world videos, and that NTG improves the data efficiency of few-shot activity understanding on the Breakfast Dataset.
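To illustrate the intermediate representation, the toy Python below treats a task graph as abstract states connected by action edges and recovers an action sequence by graph search. In NTG the graph and the policy that executes it are produced by learned networks from the demonstration video; this sketch only models the downstream planning-on-a-graph step, and all states and actions are hypothetical.

```python
from collections import deque

# Hypothetical task graph: nodes are abstract world states, edges are actions.
EDGES = {
    "all_on_table": [("pick(A)", "holding_A")],
    "holding_A":    [("place(A,B)", "A_on_B")],
    "A_on_B":       [],
}

def action_sequence(graph, start, goal):
    """Breadth-first search over the task graph for a sequence of actions
    that reaches the goal state observed in the demonstration."""
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        state, actions = queue.popleft()
        if state == goal:
            return actions
        for action, next_state in graph[state]:
            if next_state not in visited:
                visited.add(next_state)
                queue.append((next_state, actions + [action]))
    return None

print(action_sequence(EDGES, "all_on_table", "A_on_B"))
# -> ['pick(A)', 'place(A,B)']
```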


Learning Task-Oriented Grasping for Tool Manipulation from Simulated Self-Supervision

arXiv.org Machine Learning

Tool manipulation is vital for enabling robots to complete challenging task goals. It requires reasoning about the desired effect of the task and, accordingly, grasping and manipulating the tool so that this effect can be achieved. Task-agnostic grasping optimizes for grasp robustness while ignoring crucial task-specific constraints. In this paper, we propose the Task-Oriented Grasping Network (TOG-Net) to jointly optimize both task-oriented grasping of a tool and the manipulation policy for that tool. The model is trained with large-scale simulated self-supervision using procedurally generated tool objects. We perform both simulated and real-world experiments on two tool-based manipulation tasks: sweeping and hammering. Our model achieves an overall task success rate of 71.1% for sweeping and 80.0% for hammering. Supplementary material is available at: bit.ly/task-oriented-grasp
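As a rough illustration of task-oriented versus task-agnostic grasp selection, the hypothetical snippet below ranks candidate grasps by the product of a robustness score and a task-suitability score rather than by robustness alone. Both predictors are toy stand-ins for TOG-Net's learned models, and the grasp parameterization is invented for the example.

```python
import numpy as np

def select_task_oriented_grasp(grasp_candidates, robustness_fn, task_quality_fn):
    """Rank candidate grasps by predicted robustness times predicted
    suitability for the downstream task (e.g. hammering), instead of
    by robustness alone as in task-agnostic grasping."""
    scores = [robustness_fn(g) * task_quality_fn(g) for g in grasp_candidates]
    return grasp_candidates[int(np.argmax(scores))]

# Hypothetical usage with toy stand-ins for the two learned predictors.
grasps = [{"pos": (0.0, 0.0), "angle": 0.0},    # grasp near the tool head
          {"pos": (0.2, 0.0), "angle": 1.57}]   # grasp on the handle
robustness = lambda g: 0.9                                   # both grasps are stable
task_quality = lambda g: 1.0 if g["pos"][0] > 0.1 else 0.2   # hammering prefers the handle
print(select_task_oriented_grasp(grasps, robustness, task_quality))
# -> the handle grasp, even though both grasps are equally robust
```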