Learning Deformable Object Manipulation from Expert Demonstrations
Salhotra, Gautam, Liu, I-Chun Arthur, Dominguez-Kuhne, Marcus, Sukhatme, Gaurav S.
We present a novel Learning from Demonstration (LfD) method, Deformable Manipulation from Demonstrations (DMfD), to solve deformable manipulation tasks using states or images as inputs, given expert demonstrations. Our method uses demonstrations in three different ways, and balances the trade-off between exploring the environment online and using guidance from experts to explore high-dimensional spaces effectively. We test DMfD on a set of representative manipulation tasks for a 1-dimensional rope and a 2-dimensional cloth from the SoftGym suite of tasks, each with state and image observations. Our method exceeds baseline performance by up to 12.9% on state-based tasks and up to 33.44% on image-based tasks, with comparable or better robustness to randomness. Additionally, we create two challenging environments for folding a 2D cloth using image-based observations, and set a performance benchmark for them. We deploy DMfD on a real robot with a minimal loss in normalized performance during real-world execution compared to simulation (~6%). Source code is available at github.com/uscresl/dmfd.
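A minimal sketch of one way to realize the explore-vs-expert-guidance balance described above (this is an illustration, not DMfD's actual implementation): training batches are drawn from a mix of an expert demonstration buffer and the agent's own replay buffer, with the expert fraction annealed over the course of training. The class and schedule below are hypothetical.

```python
import random

class MixedReplay:
    """Sample batches from expert demonstrations and online experience,
    annealing the expert fraction from start_frac to end_frac."""

    def __init__(self, expert_transitions, start_frac=0.5, end_frac=0.1, anneal_steps=100_000):
        self.expert = list(expert_transitions)   # fixed demonstration transitions
        self.online = []                         # grows as the agent explores
        self.start_frac, self.end_frac, self.anneal_steps = start_frac, end_frac, anneal_steps
        self.step = 0

    def add(self, transition):
        self.online.append(transition)

    def expert_fraction(self):
        t = min(self.step / self.anneal_steps, 1.0)
        return self.start_frac + t * (self.end_frac - self.start_frac)

    def sample_batch(self, batch_size):
        self.step += 1
        n_expert = int(round(self.expert_fraction() * batch_size))
        batch = random.choices(self.expert, k=n_expert)
        pool = self.online if self.online else self.expert   # fall back early in training
        batch += random.choices(pool, k=batch_size - n_expert)
        return batch
```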
Distilling Motion Planner Augmented Policies into Visual Control Policies for Robot Manipulation
Liu, I-Chun Arthur, Uppal, Shagun, Sukhatme, Gaurav S., Lim, Joseph J., Englert, Peter, Lee, Youngwoon
Solving complex manipulation tasks in obstructed environments is a challenging problem in deep reinforcement learning (RL), since it requires precise object interactions as well as collision-free movement across obstacles. To tackle this problem, prior works [1-3] have proposed to combine the strengths of motion planning (MP) and RL: the safe, collision-free maneuvers of MP and the sophisticated, contact-rich interactions of RL, demonstrating promising results. However, MP requires access to the geometric state of an environment for collision checking, which is often not available in the real world, and is also computationally expensive for real-time control. To deploy such agents in realistic settings, we need to remove the dependency on state information and the costly computation of MP, so that the agent can perform a task in the visual domain. To this end, we propose a two-step distillation framework, motion planner augmented policy distillation (MoPA-PD), that transfers the state-based motion planner augmented RL policy (MoPA-RL [1]) into a visual control policy, thereby removing the motion planner and the dependency on state information. Concretely, our framework consists of two stages: (1) visual behavioral cloning (BC [4]) with trajectories collected using the MoPA-RL policy, and (2) vision-based RL training with the guidance of smoothed trajectories from the BC policy. The first step, visual BC, removes the dependency on the motion planner, and the resulting visual BC policy generates smoother behaviors than the motion planner's jittery ones.
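As a hedged illustration of the first stage (not the authors' code), the snippet below clones a visual policy from (image, action) pairs that would be collected by rolling out the state-based MoPA-RL policy; the second, RL-based stage is omitted. The network architecture, 64x64 RGB inputs, and 4-dimensional actions are assumptions for the example.

```python
import torch
import torch.nn as nn

class ConvPolicy(nn.Module):
    """Small convolutional policy mapping 64x64 RGB images to actions."""

    def __init__(self, action_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 13 * 13, 128), nn.ReLU(),   # 13x13 feature map for 64x64 inputs
            nn.Linear(128, action_dim),
        )

    def forward(self, img):
        return self.net(img)

def behavioral_cloning(images, actions, epochs=10, lr=1e-3):
    policy = ConvPolicy(action_dim=actions.shape[1])
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(policy(images), actions)  # regress the expert actions
        opt.zero_grad(); loss.backward(); opt.step()
    return policy

# toy usage with random tensors standing in for MoPA-RL rollouts
policy = behavioral_cloning(torch.rand(64, 3, 64, 64), torch.rand(64, 4), epochs=2)
```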
LUMINOUS: Indoor Scene Generation for Embodied AI Challenges
Zhao, Yizhou, Lin, Kaixiang, Jia, Zhiwei, Gao, Qiaozi, Thattai, Govind, Thomason, Jesse, Sukhatme, Gaurav S.
Learning-based methods for training embodied agents typically require a large number of high-quality scenes that contain realistic layouts and support meaningful interactions. However, current simulators for Embodied AI (EAI) challenges only provide simulated indoor scenes with a limited number of layouts. This paper presents Luminous, the first research framework that employs state-of-the-art indoor scene synthesis algorithms to generate large-scale simulated scenes for Embodied AI challenges. Further, we automatically and quantitatively evaluate the quality of generated indoor scenes via their ability to support complex household tasks. Luminous incorporates a novel scene generation algorithm (Constrained Stochastic Scene Generation (CSSG)), which achieves competitive performance with human-designed scenes. Within Luminous, the EAI task executor, task instruction generation module, and video rendering toolkit can collectively generate a massive multimodal dataset of new scenes for the training and evaluation of Embodied AI agents. Extensive experimental results demonstrate the effectiveness of the data generated by Luminous, enabling the comprehensive assessment of embodied agents on generalization and robustness.
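For intuition only, here is a tiny, hypothetical example of constrained stochastic placement in the spirit of indoor scene synthesis (not the CSSG algorithm itself): object footprints are sampled at random and rejected when they leave the room or overlap previously placed objects.

```python
import random

def overlaps(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return abs(ax - bx) < (aw + bw) / 2 and abs(ay - by) < (ah + bh) / 2

def place_objects(sizes, room=(5.0, 4.0), max_tries=1000):
    """Rejection-sample axis-aligned footprints (x, y, w, h) inside the room."""
    placed = []
    for w, h in sizes:
        for _ in range(max_tries):
            x = random.uniform(w / 2, room[0] - w / 2)   # stay inside the room
            y = random.uniform(h / 2, room[1] - h / 2)
            box = (x, y, w, h)
            if not any(overlaps(box, p) for p in placed):  # no collisions
                placed.append(box)
                break
        else:
            raise RuntimeError("could not satisfy the placement constraints")
    return placed

layout = place_objects([(1.0, 0.6), (0.8, 0.8), (0.5, 0.5)])
```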
Adaptive Sampling using POMDPs with Domain-Specific Considerations
Salhotra, Gautam, Denniston, Christopher E., Caron, David A., Sukhatme, Gaurav S.
We investigate improving Monte Carlo Tree Search based solvers for Partially Observable Markov Decision Processes (POMDPs) when applied to adaptive sampling problems. We propose improvements in rollout allocation, the action exploration algorithm, and plan commitment. The first allocates a different number of rollouts depending on how many actions the agent has taken in an episode. We find that rollouts are more valuable after some initial information is gained about the environment. Thus, allocating a fixed number of rollouts at each step (so that the total grows only linearly) is not appropriate for adaptive sampling tasks. The second alters which actions the agent chooses to explore when building the planning tree. We find that by using knowledge of the number of rollouts allocated, the agent can more effectively choose actions to explore. The third improvement is in determining how many actions the agent should take from one plan. Typically, an agent will take only the first action from the planning tree and then call the planner again from the new state. Using statistical techniques, we show that it is possible to greatly reduce the number of rollouts by increasing the number of actions taken from a single planning tree, without affecting the agent's final reward. Finally, we demonstrate experimentally, on simulated and real aquatic data from an underwater robot, that these improvements can be combined, leading to better adaptive sampling. The code for this work is available at https://github.com/uscresl/AdaptiveSamplingPOMCP.
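To make the rollout-allocation idea concrete, here is a hedged sketch (the ramp shape and constants are illustrative, not the schedule from the paper): the total rollout budget is kept fixed, but early planning calls receive fewer rollouts than later ones, once some information about the environment has been gathered.

```python
def rollout_schedule(total_rollouts, num_steps, ramp_fraction=0.3):
    """Return a per-step rollout allocation that ramps up after the initial steps."""
    ramp_steps = max(1, int(ramp_fraction * num_steps))
    # smaller weight while the agent is still uninformed, larger weight afterwards
    weights = [0.5 if step < ramp_steps else 1.5 for step in range(num_steps)]
    scale = total_rollouts / sum(weights)
    alloc = [int(w * scale) for w in weights]
    alloc[-1] += total_rollouts - sum(alloc)   # absorb rounding error
    return alloc

print(rollout_schedule(total_rollouts=1000, num_steps=10))
```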
Motion Planner Augmented Reinforcement Learning for Robot Manipulation in Obstructed Environments
Yamada, Jun, Lee, Youngwoon, Salhotra, Gautam, Pertsch, Karl, Pflueger, Max, Sukhatme, Gaurav S., Lim, Joseph J., Englert, Peter
Deep reinforcement learning (RL) agents are able to learn contact-rich manipulation tasks by maximizing a reward signal, but require large amounts of experience, especially in environments with many obstacles that complicate exploration. In contrast, motion planners use explicit models of the agent and environment to plan collision-free paths to faraway goals, but suffer from inaccurate models in tasks that require contacts with the environment. To combine the benefits of both approaches, we propose motion planner augmented RL (MoPA-RL), which augments the action space of an RL agent with the long-horizon planning capabilities of motion planners. Based on the magnitude of the action, our approach smoothly transitions between directly executing the action and invoking a motion planner. We evaluate our approach on various simulated manipulation tasks and compare it to alternative action spaces in terms of learning efficiency and safety. The experiments demonstrate that MoPA-RL increases learning efficiency, leads to faster exploration, and results in safer policies that avoid collisions with the environment. Videos and code are available at https://clvrai.com/mopa-rl.
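A minimal sketch of the augmented action space (illustrative only; the threshold, interfaces, and displacement-based control are assumptions, not the MoPA-RL code): small actions are executed directly as joint displacements, while large actions are treated as distant subgoals handed to a motion planner that returns a collision-free sequence of waypoints.

```python
import numpy as np

def step_augmented(env, motion_planner, q_current, action, direct_threshold=0.1):
    """Execute `action` directly if it is small; otherwise plan to the distant subgoal."""
    if np.max(np.abs(action)) <= direct_threshold:
        return env.step(action)                            # contact-rich, direct control
    path = motion_planner(q_current, q_current + action)   # collision-free waypoints
    obs = reward = done = info = None
    for waypoint in path:
        obs, reward, done, info = env.step(waypoint - q_current)
        q_current = waypoint
        if done:
            break
    return obs, reward, done, info
```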
Never Stop Learning: The Effectiveness of Fine-Tuning in Robotic Reinforcement Learning
Julian, Ryan, Swanson, Benjamin, Sukhatme, Gaurav S., Levine, Sergey, Finn, Chelsea, Hausman, Karol
One of the great promises of robot learning systems is that they will be able to learn from their mistakes and continuously adapt to ever-changing environments. Despite this potential, most robot learning systems today are deployed with a fixed policy that is not adapted after deployment. Can we efficiently adapt previously learned behaviors to new environments, objects, and percepts in the real world? In this paper, we present a method and empirical evidence towards a robot learning framework that facilitates continuous adaptation. In particular, we demonstrate how to adapt vision-based robotic manipulation policies to new variations by fine-tuning via off-policy reinforcement learning, including changes in background, object shape and appearance, lighting conditions, and robot morphology. Further, this adaptation uses less than 0.2% of the data necessary to learn the task from scratch. We find that our approach of adapting pre-trained policies leads to substantial performance gains over the course of fine-tuning, and that pre-training via RL is essential: training from scratch or adapting from supervised ImageNet features are both unsuccessful with such small amounts of data. We also find that these positive results hold in a limited continual learning setting, in which we repeatedly fine-tune a single lineage of policies using data from a succession of new tasks. Our empirical conclusions are consistently supported by experiments on simulated manipulation tasks, and by 52 unique fine-tuning experiments on a real robotic grasping system pre-trained on 580,000 grasps.
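A hedged sketch of the fine-tuning recipe (the names, the actor-critic objective, and the hyperparameters are illustrative assumptions): the policy is initialized from pre-trained weights and updated off-policy on a small buffer of transitions from the new environment, rather than being trained from scratch.

```python
import copy
import torch

def fine_tune(pretrained_policy, q_function, new_env_buffer, steps=500, lr=1e-4):
    """Continue off-policy updates from a pre-trained policy on new-environment data."""
    policy = copy.deepcopy(pretrained_policy)        # start from the pre-trained weights
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(steps):
        obs, act, rew, next_obs = new_env_buffer.sample(batch_size=64)
        loss = -q_function(obs, policy(obs)).mean()  # maximize Q(s, pi(s))
        opt.zero_grad(); loss.backward(); opt.step()
    return policy
```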
Resilient Coverage: Exploring the Local-to-Global Trade-off
Ramachandran, Ragesh K., Zhou, Lifeng, Sukhatme, Gaurav S.
We propose a centralized control framework to select suitable robots from a heterogeneous pool and place them at appropriate locations to monitor a region for events of interest. In the event of a robot failure, the framework repositions robots in a user-defined local neighborhood of the failed robot to compensate for the coverage loss. The central controller augments the team with additional robots from the robot pool when simply repositioning robots fails to attain a user-specified level of desired coverage. The size of the local neighborhood around the failed robot and the desired coverage over the region are two settings that can be adjusted to achieve a user-specified balance. We investigate the trade-off between the coverage compensation achieved through local repositioning and the computation required to plan the new robot locations. We also study the relationship between the size of the local neighborhood and the number of additional robots added to the team for a given user-specified level of desired coverage. The computational complexity of our resilient strategy (tunable resilient coordination) is quadratic in both the neighborhood size and the number of robots in the team. At first glance, it seems that any desired level of coverage can be efficiently achieved by augmenting the robot team with more robots while keeping the neighborhood size fixed. However, we show that to reach a high level of coverage in a neighborhood with a large robot population, it is more efficient to enlarge the neighborhood size instead of adding additional robots and repositioning them.
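For intuition, a small hypothetical sketch of the recovery loop (greedy placement on a grid is a stand-in for the paper's coordination strategy): after a failure, only robots within the local neighborhood are re-placed, and spare robots are added one at a time if the desired coverage is still not met.

```python
import itertools
import numpy as np

def coverage(grid_pts, robots, radius=1.5):
    if not robots:
        return 0.0
    d = np.linalg.norm(grid_pts[:, None, :] - np.array(robots)[None, :, :], axis=-1)
    return float((d.min(axis=1) <= radius).mean())   # fraction of points covered

def greedy_place(grid_pts, fixed, candidates, k):
    placed = []
    for _ in range(k):
        placed.append(max(candidates, key=lambda c: coverage(grid_pts, fixed + placed + [c])))
    return placed

def recover_from_failure(grid_pts, robots, failed_idx, pool, neighborhood=2.0, desired=0.8):
    failed = np.array(robots[failed_idx])
    survivors = [r for i, r in enumerate(robots) if i != failed_idx]
    near = [r for r in survivors if np.linalg.norm(np.array(r) - failed) <= neighborhood]
    far = [r for r in survivors if np.linalg.norm(np.array(r) - failed) > neighborhood]
    candidates = [tuple(p) for p in grid_pts]
    team = far + greedy_place(grid_pts, far, candidates, len(near))   # reposition local robots
    while coverage(grid_pts, team) < desired and pool:                # augment from the pool
        pool.pop()
        team += greedy_place(grid_pts, team, candidates, 1)
    return team

grid = np.array(list(itertools.product(np.linspace(0, 4, 9), repeat=2)))
team = recover_from_failure(grid, [(1, 1), (3, 1), (2, 3)], failed_idx=1, pool=["spare"])
```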
Interactive Differentiable Simulation
Heiden, Eric, Millard, David, Zhang, Hejia, Sukhatme, Gaurav S.
Intelligent agents need a physical understanding of the world to predict the impact of their actions in the future. While learning-based models of the environment dynamics have contributed to significant improvements in sample efficiency compared to model-free reinforcement learning algorithms, they typically fail to generalize to system states beyond the training data and often ground their predictions on non-interpretable latent variables. We introduce Interactive Differentiable Simulation (IDS), a differentiable physics engine that allows for efficient, accurate inference of physical properties of rigid-body systems. Integrated into deep learning architectures, our model is able to accomplish system identification from visual input, leading to an interpretable model of the world whose parameters have physical meaning. We present experiments showing automatic task-based robot design and parameter estimation for nonlinear dynamical systems by automatically calculating gradients in IDS. When integrated into an adaptive model-predictive control algorithm, our approach exhibits orders-of-magnitude improvements in sample efficiency over model-free reinforcement learning algorithms on challenging nonlinear control domains.
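As a hedged illustration of the core mechanism (a mass-spring-damper written in an autodiff framework stands in for a rigid-body system; this is not the IDS engine): because the simulator itself is differentiable, unknown physical parameters can be recovered by gradient descent on the error between simulated and observed trajectories.

```python
import torch

def simulate(mass, stiffness, damping, x0=1.0, v0=0.0, dt=0.01, steps=200):
    x, v = torch.tensor(x0), torch.tensor(v0)
    traj = []
    for _ in range(steps):
        a = (-stiffness * x - damping * v) / mass   # F = -k x - c v
        v = v + dt * a
        x = x + dt * v
        traj.append(x)
    return torch.stack(traj)

# "observed" trajectory generated with ground-truth parameters
with torch.no_grad():
    target = simulate(torch.tensor(1.0), torch.tensor(4.0), torch.tensor(0.3))

# identify stiffness and damping by backpropagating through the simulator
k = torch.tensor(1.0, requires_grad=True)
c = torch.tensor(0.1, requires_grad=True)
opt = torch.optim.Adam([k, c], lr=0.05)
for _ in range(300):
    loss = torch.mean((simulate(torch.tensor(1.0), k, c) - target) ** 2)
    opt.zero_grad(); loss.backward(); opt.step()
```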
A Solution to Time-Varying Markov Decision Processes
Liu, Lantao, Sukhatme, Gaurav S.
We consider a decision-making problem where the environment varies both in space and time. Such problems arise naturally when considering, e.g., the navigation of an underwater robot amidst ocean currents or the navigation of an aerial vehicle in wind. To model such spatiotemporal variation, we extend the standard Markov Decision Process (MDP) to a new framework called the Time-Varying Markov Decision Process (TVMDP). The TVMDP has a time-varying state transition model and transforms the standard MDP, which considers only immediate and static descriptions of state-transition uncertainty, into a framework that can adapt to future time-varying transition dynamics over some horizon. We show how to solve a TVMDP by redesigning the MDP value propagation mechanisms to incorporate the introduced dynamics along the temporal dimension. We validate our framework in a marine robotics navigation setting using spatiotemporal ocean data and show that it outperforms prior efforts.
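To make the value-propagation idea concrete, here is a minimal, hypothetical finite-horizon backup in which the transition model is indexed by time, so the backup at step t uses the transition matrix predicted for that time rather than a single static matrix (this is an illustration, not the TVMDP solver).

```python
import numpy as np

def finite_horizon_values(P_t, R, gamma=0.95):
    """P_t: length-T list of transition arrays with shape [A, S, S]; R: rewards [S]."""
    V = np.zeros(R.shape[0])                       # value at the end of the horizon
    policy = []
    for t in reversed(range(len(P_t))):
        Q = R[None, :] + gamma * P_t[t] @ V        # [A, S] backup with the time-t model
        policy.append(Q.argmax(axis=0))
        V = Q.max(axis=0)
    policy.reverse()
    return V, policy

# toy example: 2 actions, 3 states, transition model drifting over a horizon of 5
rng = np.random.default_rng(0)
P_t = [rng.dirichlet(np.ones(3), size=(2, 3)) for _ in range(5)]
V, policy = finite_horizon_values(P_t, R=np.array([0.0, 0.0, 1.0]))
```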
A Spatio-Temporal Representation for the Orienteering Problem with Time-Varying Profits
Ma, Zhibei, Yin, Kai, Liu, Lantao, Sukhatme, Gaurav S.
We consider an orienteering problem (OP) where an agent needs to visit a series (possibly a subset) of depots, with the goal of maximizing the accumulated profit within a given, limited time budget. Unlike most existing work, which assumes static profits, we investigate a variant with arbitrary time-dependent profits. Specifically, the profits to be collected change over time, following different (e.g., independent) time-varying functions. The problem is inherently nonlinear and difficult to solve with existing methods. To tackle the challenge, we present a simple and effective framework that incorporates time variations into the fundamental planning process. Specifically, we propose a deterministic spatio-temporal representation in which both the spatial description and the temporal logic are unified into one routing topology. By employing basic sorting and searching algorithms, routing solutions can be computed very efficiently. The proposed method is easy to implement, and extensive numerical results show that our approach is time efficient and generates near-optimal solutions.
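For a concrete sense of the problem (not the paper's representation), the brute-force sketch below enumerates visit orders, tracks arrival times, and evaluates the time-dependent profits at those arrival times; the proposed spatio-temporal routing topology is what makes this kind of computation efficient at scale. The depot names, travel times, and decaying profit functions are made up for the example.

```python
import itertools

def best_route(depots, travel_time, profit_fn, budget):
    """profit_fn(depot, t) is the profit collected when arriving at `depot` at time t."""
    best = (0.0, [])
    for r in range(1, len(depots) + 1):
        for order in itertools.permutations(depots, r):
            t, total, prev, feasible = 0.0, 0.0, None, True
            for d in order:
                t += travel_time(prev, d) if prev is not None else 0.0
                if t > budget:
                    feasible = False
                    break
                total += profit_fn(d, t)   # profit depends on the arrival time
                prev = d
            if feasible and total > best[0]:
                best = (total, list(order))
    return best

# toy example: per-depot profits decay linearly with the arrival time
dist = {("A", "B"): 2, ("B", "A"): 2, ("A", "C"): 3, ("C", "A"): 3, ("B", "C"): 1, ("C", "B"): 1}
profit, route = best_route(
    ["A", "B", "C"],
    travel_time=lambda a, b: dist[(a, b)],
    profit_fn=lambda d, t: {"A": 5, "B": 4, "C": 6}[d] * max(0.0, 1 - 0.1 * t),
    budget=5,
)
```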