Held, David
On Time-Indexing as Inductive Bias in Deep RL for Sequential Manipulation Tasks
Qureshi, M. Nomaan, Eisner, Ben, Held, David
In standard policy learning, a single neural-network-based policy is tasked with learning all of the constituent skills (and learning to switch between them), without any access to structures that explicitly encode the multi-modal nature of the task space. Ideally, policies would emergently learn to decompose tasks at different levels of abstraction and factor the task learning into unique skills. One common approach is to jointly learn a set of subskills as well as a selection function that chooses a specific subskill to execute at the current time step [5]. This poses a fundamental bootstrapping issue: as the skills change and improve, the selection function must change and improve as well, which can lead to unstable training. An important observation about many optimal policies for manipulation tasks is that skills tend to be executed in sequence, without backtracking. Therefore, time itself can serve as a useful indicator for skill selection. For instance, while executing a stacking task, it is reasonable to assume that the robot will undertake the 'reach' skill at the start of the task and subsequently perform the 'stack' skill towards the end. Our intuition is that selecting the skill according to the current time step can serve as a good strategy for selecting which skill to execute.
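A minimal sketch of this idea, with all names hypothetical: the active subskill is chosen purely from the normalized episode time, assuming skills are executed in a fixed order without backtracking.

# Minimal sketch of time-indexed skill selection (all names hypothetical).
# Instead of a learned selection function, the active subskill is chosen
# purely from the normalized episode time.
import numpy as np

class TimeIndexedPolicy:
    def __init__(self, subskills, horizon):
        self.subskills = subskills      # list of callables: obs -> action
        self.horizon = horizon          # total episode length in steps

    def act(self, obs, t):
        # Map the current step to a skill index: early steps use the first
        # skill (e.g. 'reach'), later steps use the last (e.g. 'stack').
        phase = t / self.horizon
        idx = min(int(phase * len(self.subskills)), len(self.subskills) - 1)
        return self.subskills[idx](obs)

# Usage with two dummy skills on a toy observation.
reach = lambda obs: np.array([1.0, 0.0])
stack = lambda obs: np.array([0.0, 1.0])
policy = TimeIndexedPolicy([reach, stack], horizon=100)
action = policy.act(obs=np.zeros(3), t=80)   # selects the 'stack' skill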
Neural Grasp Distance Fields for Robot Manipulation
Weng, Thomas, Held, David, Meier, Franziska, Mukadam, Mustafa
We formulate grasp learning as a neural field and present Neural Grasp Distance Fields (NGDF). Here, the input is a 6D pose of a robot end effector and the output is a distance to a continuous manifold of valid grasps for an object. In contrast to current approaches that predict a set of discrete candidate grasps, the distance-based NGDF representation is easily interpreted as a cost, and minimizing this cost produces a successful grasp pose. This grasp distance cost can be incorporated directly into a trajectory optimizer for joint optimization with other costs such as trajectory smoothness and collision avoidance, so that the optimization yields a smooth, collision-free trajectory that reaches a valid grasp pose. We evaluate NGDF on joint grasp and motion planning in simulation and the real world, outperforming baselines by 63%.
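To illustrate how such a learned distance could act as one cost among several, here is a hedged sketch in which a small network stands in for a trained grasp distance field and the final waypoint of a trajectory is pulled toward the grasp manifold by gradient descent. The network, cost weights, and pose parameterization are illustrative stand-ins, not the released NGDF model or optimizer.

# Hedged sketch: a learned grasp-distance field used as one cost term in
# trajectory optimization (all components are hypothetical stand-ins).
import torch

def grasp_cost(ngdf, ee_pose):
    # ngdf maps an end-effector pose to an estimated distance to the
    # nearest valid grasp; that distance is used directly as a cost.
    return ngdf(ee_pose).squeeze()

def trajectory_cost(ngdf, waypoints, w_grasp=1.0, w_smooth=0.1):
    smooth = ((waypoints[1:] - waypoints[:-1]) ** 2).sum()   # smoothness term
    grasp = grasp_cost(ngdf, waypoints[-1])                  # final pose should reach a grasp
    return w_grasp * grasp + w_smooth * smooth

# Toy optimization loop: a tiny MLP stands in for a trained NGDF.
ngdf = torch.nn.Sequential(torch.nn.Linear(7, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
waypoints = torch.randn(10, 7, requires_grad=True)           # 10 poses (position + quaternion)
opt = torch.optim.Adam([waypoints], lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = trajectory_cost(ngdf, waypoints)
    loss.backward()
    opt.step()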
Object Importance Estimation using Counterfactual Reasoning for Intelligent Driving
Gupta, Pranay, Biswas, Abhijat, Admoni, Henny, Held, David
The ability to identify important objects in a complex and dynamic driving environment is essential for autonomous driving agents to make safe and efficient driving decisions. It also helps assistive driving systems decide when to alert drivers. We tackle object importance estimation in a data-driven fashion and introduce HOIST - Human-annotated Object Importance in Simulated Traffic. HOIST contains driving scenarios with human-annotated importance labels for vehicles and pedestrians. We additionally propose a novel approach that relies on counterfactual reasoning to estimate an object's importance. We generate counterfactual scenarios by modifying the motion of objects and ascribe importance based on how the modifications affect the ego vehicle's driving. Our approach outperforms strong baselines for the task of object importance estimation on HOIST. We also perform ablation studies to justify our design choices and show the significance of the different components of our proposed approach.
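The counterfactual scoring idea can be sketched as follows, with the ego planner and scene-editing step passed in as hypothetical callables: an object's importance is taken to be how much the ego vehicle's planned trajectory changes when that object's motion is altered or removed.

# Illustrative sketch of counterfactual importance scoring (planner and
# scene-editing interfaces are hypothetical).
import numpy as np

def importance_scores(scene, objects, plan_ego, remove_object):
    base_traj = plan_ego(scene)                       # ego trajectory in the original scene
    scores = {}
    for obj in objects:
        cf_scene = remove_object(scene, obj)          # counterfactual: object deleted / frozen
        cf_traj = plan_ego(cf_scene)
        # Importance = average deviation between the two ego trajectories.
        scores[obj] = float(np.mean(np.linalg.norm(base_traj - cf_traj, axis=-1)))
    return scores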
RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation
Wang, Yufei, Xian, Zhou, Chen, Feng, Wang, Tsun-Hsuan, Wang, Yian, Erickson, Zackory, Held, David, Gan, Chuang
We present RoboGen, a generative robotic agent that automatically learns diverse robotic skills at scale via generative simulation. RoboGen leverages the latest advancements in foundation and generative models. Instead of directly using or adapting these models to produce policies or low-level actions, we advocate for a generative scheme, which uses these models to automatically generate diversified tasks, scenes, and training supervisions, thereby scaling up robotic skill learning with minimal human supervision. Our approach equips a robotic agent with a self-guided propose-generate-learn cycle: the agent first proposes interesting tasks and skills to develop, and then generates corresponding simulation environments by populating pertinent objects and assets with proper spatial configurations. Afterwards, the agent decomposes the proposed high-level task into sub-tasks, selects the optimal learning approach (reinforcement learning, motion planning, or trajectory optimization), generates the required training supervision, and then learns policies to acquire the proposed skill. Our work attempts to extract the extensive and versatile knowledge embedded in large-scale models and transfer it to the field of robotics. Our fully generative pipeline can be queried repeatedly, producing an endless stream of skill demonstrations associated with diverse tasks and environments.
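A schematic sketch of such a propose-generate-learn cycle, with each stage supplied as a callable; the interfaces are hypothetical and do not correspond to the RoboGen codebase.

# Schematic propose-generate-learn loop (all interfaces hypothetical).
def propose_generate_learn(propose_task, build_scene, decompose, choose_learner, n_cycles=10):
    skills = []
    for _ in range(n_cycles):
        task = propose_task()                 # e.g. a foundation model proposes a task
        scene = build_scene(task)             # populate a simulation with relevant assets
        for subtask in decompose(task):       # break the task into sub-tasks
            learner = choose_learner(subtask) # RL, motion planning, or trajectory optimization
            skills.append(learner(scene, subtask))
    return skills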
Force-Constrained Visual Policy: Safe Robot-Assisted Dressing via Multi-Modal Sensing
Sun, Zhanyi, Wang, Yufei, Held, David, Erickson, Zackory
Robot-assisted dressing could profoundly enhance the quality of life of adults with physical disabilities. To achieve this, a robot can benefit from both visual and force sensing. The former enables the robot to ascertain human body pose and garment deformations, while the latter helps maintain safety and comfort during the dressing process. In this paper, we introduce a new technique that leverages both vision and force modalities for this assistive task. Our approach first trains a vision-based dressing policy using reinforcement learning in simulation with varying body sizes, poses, and types of garments. We then learn a force dynamics model for action planning to ensure safety. Due to limitations of simulating accurate force data when deformable garments interact with the human body, we learn a force dynamics model directly from real-world data. Our proposed method combines the vision-based policy, trained in simulation, with the force dynamics model, learned in the real world, by solving a constrained optimization problem to infer actions that facilitate the dressing process without applying excessive force on the person. We evaluate our system in simulation and in a real-world human study with 10 participants across 240 dressing trials, showing it greatly outperforms prior baselines. Video demonstrations are available on our project website: https://sites.google.com/view/dressing-fcvp.
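One way to picture the combination is the following hedged sketch, which approximates the constrained optimization by sampling actions around the vision policy's output and keeping only those whose predicted force stays under a limit; the policy and force-model interfaces are hypothetical.

# Sampling-based approximation of force-constrained action selection
# (vision_policy and force_model are hypothetical callables).
import numpy as np

def safe_action(vision_policy, force_model, obs, force_limit, n_samples=256, noise=0.05):
    a_ref = vision_policy(obs)                              # action proposed from vision alone
    candidates = a_ref + noise * np.random.randn(n_samples, a_ref.shape[0])
    predicted_force = force_model(obs, candidates)          # predicted force per candidate
    feasible = candidates[predicted_force <= force_limit]
    if len(feasible) == 0:
        return np.zeros_like(a_ref)                         # no safe action: hold still
    # Among safe candidates, stay closest to the vision policy's action.
    dists = np.linalg.norm(feasible - a_ref, axis=1)
    return feasible[np.argmin(dists)]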
HACMan: Learning Hybrid Actor-Critic Maps for 6D Non-Prehensile Manipulation
Zhou, Wenxuan, Jiang, Bowen, Yang, Fan, Paxton, Chris, Held, David
Manipulating objects without grasping them is an essential component of human dexterity, referred to as non-prehensile manipulation. Non-prehensile manipulation may enable more complex interactions with the objects, but also presents challenges in reasoning about gripper-object interactions. In this work, we introduce Hybrid Actor-Critic Maps for Manipulation (HACMan), a reinforcement learning approach for 6D non-prehensile manipulation of objects using point cloud observations. HACMan proposes a temporally-abstracted and spatially-grounded object-centric action representation that consists of selecting a contact location from the object point cloud and a set of motion parameters describing how the robot will move after making contact. We modify an existing off-policy RL algorithm to learn in this hybrid discrete-continuous action representation. We evaluate HACMan on a 6D object pose alignment task in both simulation and the real world. On the hardest version of our task, with randomized initial poses, randomized 6D goals, and diverse object categories, our policy demonstrates strong generalization to unseen object categories without a performance drop, achieving an 89% success rate on unseen objects in simulation and a 50% success rate with zero-shot transfer in the real world. Compared to alternative action representations, HACMan achieves a success rate more than three times higher than the best baseline. With zero-shot sim2real transfer, our policy can successfully manipulate unseen objects in the real world for challenging non-planar goals, using dynamic and contact-rich non-prehensile skills. Videos can be found on the project website: https://hacman-2023.github.io.
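The hybrid action can be illustrated with a toy per-point actor-critic map: a per-point critic scores candidate contact locations (the discrete part) and a per-point actor outputs motion parameters for the chosen point (the continuous part). The networks below are toy stand-ins, not the HACMan architecture.

# Toy hybrid discrete-continuous action from per-point maps.
import torch

n_points, feat_dim, motion_dim = 1024, 64, 6
critic_head = torch.nn.Linear(feat_dim, 1)           # per-point score (Q-value map)
actor_head = torch.nn.Linear(feat_dim, motion_dim)    # per-point motion parameters

point_feats = torch.randn(n_points, feat_dim)         # per-point features from a point-cloud encoder
scores = critic_head(point_feats).squeeze(-1)          # (n_points,) score per contact candidate
contact_idx = torch.argmax(scores)                     # discrete part: choose a contact point
motion_params = actor_head(point_feats[contact_idx])   # continuous part: how to move after contact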
Reinforcement Learning in a Safety-Embedded MDP with Trajectory Optimization
Yang, Fan, Zhou, Wenxuan, Liu, Zuxin, Zhao, Ding, Held, David
Safe Reinforcement Learning (RL) plays an important role in applying RL algorithms to safety-critical real-world applications, addressing the trade-off between maximizing rewards and adhering to safety constraints. This work introduces a novel approach that combines RL with trajectory optimization to manage this trade-off effectively. Our approach embeds safety constraints within the action space of a modified Markov Decision Process (MDP). The RL agent produces a sequence of actions that are transformed into safe trajectories by a trajectory optimizer, thereby effectively ensuring safety and increasing training stability. This novel approach excels in its performance on challenging Safety Gym tasks, achieving significantly higher rewards and near-zero safety violations during inference. The method's real-world applicability is demonstrated through a safe and effective deployment in a real robot task of box-pushing around obstacles.
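A rough sketch of the action-space embedding, assuming a hypothetical environment interface: the agent's raw action is read as a candidate trajectory and projected toward a safe one before execution, with a naive obstacle-repulsion step standing in for a real trajectory optimizer.

# Sketch of embedding safety in the action space (interfaces hypothetical).
import numpy as np

def project_to_safe(traj, obstacles, min_dist=0.2, iters=50, step=0.1):
    # Simple repulsion-based projection: push waypoints away from obstacles.
    traj = traj.copy()
    for _ in range(iters):
        for obs in obstacles:
            diff = traj - obs
            dist = np.linalg.norm(diff, axis=1, keepdims=True)
            mask = dist < min_dist
            traj += step * mask * diff / np.maximum(dist, 1e-6)
    return traj

def safe_step(env, agent_action, obstacles):
    candidate = agent_action.reshape(-1, 2)        # interpret the action as 2D waypoints
    safe_traj = project_to_safe(candidate, obstacles)
    return env.execute_trajectory(safe_traj)       # hypothetical environment interface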
Learning Generalizable Tool-use Skills through Trajectory Generation
Qi, Carl, Shetty, Sarthak, Lin, Xingyu, Held, David
Autonomous systems that efficiently utilize tools can assist humans in completing many common tasks such as cooking and cleaning. However, current systems fall short of matching human-level intelligence in adapting to novel tools. Prior affordance-based works often make strong assumptions about the environment and cannot scale to more complex, contact-rich tasks. In this work, we tackle this challenge and explore how agents can learn to use previously unseen tools to manipulate deformable objects. We propose to learn a generative model of the tool-use trajectories as a sequence of point clouds, which generalizes to different tool shapes. Given any novel tool, we first generate a tool-use trajectory and then optimize the sequence of tool poses to align with the generated trajectory. We train a single model for four different challenging deformable object manipulation tasks. Our model is trained with demonstration data from just a single tool for each task and is able to generalize to various novel tools, significantly outperforming baselines. Additional materials can be found on our project website: https://sites.google.com/view/toolgen.
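The pose-alignment step can be sketched as follows, assuming known point correspondences and using a standard Kabsch fit as a stand-in for the paper's optimization: each generated frame of the tool-use trajectory yields one rigid pose for the tool.

# Kabsch-based pose fitting per generated frame (illustrative stand-in).
import numpy as np

def fit_rigid_pose(tool_pts, target_pts):
    # Best-fit rotation R and translation t with R @ tool + t ~= target.
    mu_a, mu_b = tool_pts.mean(0), target_pts.mean(0)
    H = (tool_pts - mu_a).T @ (target_pts - mu_b)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = mu_b - R @ mu_a
    return R, t

def fit_trajectory(tool_pts, generated_clouds):
    # One rigid pose per generated frame of the tool-use trajectory.
    return [fit_rigid_pose(tool_pts, cloud) for cloud in generated_clouds]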
FlowBot++: Learning Generalized Articulated Objects Manipulation via Articulation Projection
Zhang, Harry, Eisner, Ben, Held, David
Understanding and manipulating articulated objects, such as doors and drawers, is crucial for robots operating in human environments. We wish to develop a system that can learn to articulate novel objects with no prior interaction, after training on other articulated objects. Previous approaches for articulated object manipulation rely either on modular methods, which tend to be brittle, or on end-to-end methods, which lack generalizability. This paper presents FlowBot++, a deep 3D vision-based robotic system that predicts dense per-point motion and dense articulation parameters of articulated objects to assist in downstream manipulation tasks. FlowBot++ introduces a novel per-point representation of the articulated motion and articulation parameters that are combined to produce a more accurate estimate than either produces on its own. Simulated experiments on the PartNet-Mobility dataset validate the performance of our system in articulating a wide range of objects, while real-world experiments on real objects' point clouds and a Sawyer robot demonstrate the generalizability and feasibility of our system in real-world scenarios.
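As a hedged illustration of how articulation parameters translate into dense per-point motion, the sketch below rotates a part's points about a predicted revolute axis using Rodrigues' formula; in the actual system the axis and motion come from the network's predictions, and all inputs here are illustrative.

# Per-point motion induced by a revolute articulation axis (illustrative).
import numpy as np

def articulate_points(points, axis_origin, axis_dir, angle):
    # Rotate each point about the joint axis by the commanded angle.
    k = axis_dir / np.linalg.norm(axis_dir)
    p = points - axis_origin
    cos, sin = np.cos(angle), np.sin(angle)
    rotated = (p * cos
               + np.cross(k, p) * sin
               + np.outer(p @ k, k) * (1 - cos))
    return rotated + axis_origin

# The dense per-point flow is the displacement toward the articulated pose.
points = np.random.rand(100, 3)
flow = articulate_points(points, np.zeros(3), np.array([0.0, 0.0, 1.0]), np.pi / 6) - points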
Bagging by Learning to Singulate Layers Using Interactive Perception
Chen, Lawrence Yunliang, Shi, Baiyu, Lin, Roy, Seita, Daniel, Ahmad, Ayah, Cheng, Richard, Kollar, Thomas, Held, David, Goldberg, Ken
Many fabric handling and 2D deformable material tasks in homes and industry require singulating layers of material, for example when opening a bag or arranging garments for sewing. In contrast to methods requiring specialized sensing or end effectors, we use only visual observations with ordinary parallel jaw grippers. We propose SLIP: Singulating Layers using Interactive Perception, and apply SLIP to the task of autonomous bagging. We develop SLIP-Bagging, a bagging algorithm that manipulates a plastic or fabric bag from an unstructured state and uses SLIP to grasp the top layer of the bag to open it for object insertion. In physical experiments, a YuMi robot achieves a success rate of 67% to 81% across bags of a variety of materials, shapes, and sizes, significantly improving on prior work in success rate and generality. Experiments also suggest that SLIP can be applied to tasks such as singulating layers of folded cloth and garments. Supplementary material is available at https://sites.google.com/view/slip-bagging/.