UniGrasp: Learning a Unified Model to Grasp with N-Fingered Robotic Hands

arXiv.org Artificial Intelligence

To achieve a successful grasp, gripper attributes, including geometry and kinematics, play a role as important as the geometry of the target object. The majority of previous work has focused on developing grasp methods that generalize over novel object geometry but are specific to a certain robot hand. We propose UniGrasp, an efficient data-driven grasp synthesis method that considers both the object geometry and gripper attributes as inputs. UniGrasp is based on a novel deep neural network architecture that selects sets of contact points from the input point cloud of the object. The proposed model is trained on a large dataset to produce contact points that are in force closure and reachable by the robot hand. Because the output is a set of contact points, the method transfers across a diverse set of N-fingered robotic hands. Our model produces over 90 percent valid contact points among the Top-10 predictions in simulation and more than 90 percent successful grasps in real-world experiments with various known two-fingered and three-fingered grippers. It also achieves 93 percent and 83 percent successful grasps in real-world experiments with a novel two-fingered gripper and a novel five-fingered anthropomorphic hand, respectively.
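As a rough illustration of the kind of geometric condition such contact-point labels encode, the sketch below checks whether a pair of contact points is antipodal, a standard sufficient condition for two-finger force closure under a Coulomb friction model. The function name, friction coefficient, and example geometry are illustrative assumptions, not code from the paper.

```python
import numpy as np

def antipodal_force_closure(p1, n1, p2, n2, mu=0.5):
    """Check whether two contacts form an antipodal (force-closure) grasp
    for a two-fingered gripper under a Coulomb friction model.

    p1, p2 : 3D contact positions on the object surface.
    n1, n2 : inward-pointing unit surface normals at those contacts.
    mu     : friction coefficient (assumed value, for illustration only).
    """
    half_angle = np.arctan(mu)          # half-angle of each friction cone
    d = p2 - p1
    d = d / np.linalg.norm(d)           # unit vector from contact 1 to contact 2
    # The line connecting the contacts must lie inside both friction cones.
    in_cone_1 = np.arccos(np.clip(np.dot(d, n1), -1.0, 1.0)) <= half_angle
    in_cone_2 = np.arccos(np.clip(np.dot(-d, n2), -1.0, 1.0)) <= half_angle
    return in_cone_1 and in_cone_2

# Example: two contacts on opposite faces of a 4 cm cube.
p1, n1 = np.array([0.02, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])
p2, n2 = np.array([-0.02, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
print(antipodal_force_closure(p1, n1, p2, n2))  # True
```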


Learning to Regrasp by Learning to Place

arXiv.org Artificial Intelligence

In this paper, we explore whether a robot can learn to regrasp a diverse set of objects to achieve various desired grasp poses. Regrasping is needed whenever a robot's current grasp pose fails to perform a desired manipulation task. Endowing robots with this ability has applications in many domains, such as manufacturing and domestic services. Yet it is a challenging task because of the large diversity of geometry in everyday objects and the high dimensionality of the state and action spaces. We propose a system that takes partial point clouds of an object and its supporting environment as inputs and outputs a sequence of pick-and-place operations that transform an initial object grasp pose into the desired one. The key techniques are a neural stable-placement predictor and a regrasp-graph-based solver that leverages and changes the surrounding environment. We introduce a new and challenging synthetic dataset for learning and evaluating the proposed approach, on which our system achieves a 73.3% success rate in regrasping diverse objects.
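To make the regrasp-graph idea concrete, the following sketch searches a graph whose nodes are grasps and whose edges connect grasps reachable from a common stable placement; breadth-first search then yields a sequence of place-and-repick steps. The data layout and names are hypothetical, and the feasibility sets are assumed to come from a stable-placement predictor like the learned one described above.

```python
from collections import deque

def plan_regrasp(initial_grasp, goal_grasp, feasible):
    """Breadth-first search over a regrasp graph.

    feasible[placement] is the set of grasps reachable when the object
    rests in that placement (e.g. output of a stable-placement predictor).
    Returns a list of (place_as, regrasp_with) steps, or None if no path exists.
    """
    # Build grasp-to-grasp edges via shared stable placements.
    neighbors = {}
    for placement, grasps in feasible.items():
        for g in grasps:
            for h in grasps:
                if g != h:
                    neighbors.setdefault(g, []).append((placement, h))

    queue = deque([(initial_grasp, [])])
    visited = {initial_grasp}
    while queue:
        grasp, plan = queue.popleft()
        if grasp == goal_grasp:
            return plan
        for placement, nxt in neighbors.get(grasp, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, plan + [(placement, nxt)]))
    return None

# Hypothetical example: start with a top grasp, place the object on its side,
# then regrasp from the bottom.
feasible = {"upright": {"top", "side_grasp"}, "on_side": {"side_grasp", "bottom"}}
print(plan_regrasp("top", "bottom", feasible))
```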


Grasp classification system improves human-to-robot handovers

#artificialintelligence

Giving objects to and taking objects from humans are fundamental capabilities for collaborative robots in a variety of applications. NVIDIA researchers hope to improve these human-to-robot handovers by framing them as a hand-grasp classification problem. In a paper titled "Human Grasp Classification for Reactive Human-to-Robot Handovers", researchers at NVIDIA's Seattle AI Robotics Research Lab describe a proof of concept that they claim results in more fluent human-to-robot handovers than previous approaches. The system classifies the human's grasp and plans a robot trajectory to take the object from the human's hand. To do this, the researchers developed a perception system that can accurately identify a hand and objects in a variety of poses.
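A minimal sketch of that pipeline, assuming a grasp classifier is already available, might map each predicted grasp class to a safe approach offset for the robot. The class names, offsets, and helper below are hypothetical placeholders, not the system described in the paper.

```python
import numpy as np

# Hypothetical mapping from a predicted human grasp class to an approach
# offset (metres) relative to the detected object, chosen so the gripper
# stays clear of the human's fingers.
APPROACH_OFFSETS = {
    "pinch_top":    np.array([0.0, 0.0, -0.10]),  # approach from below
    "pinch_bottom": np.array([0.0, 0.0,  0.10]),  # approach from above
    "on_open_palm": np.array([0.0, 0.0,  0.10]),  # approach from above
}

def plan_takeover_position(object_position, grasp_class):
    """Return a pre-grasp position for the robot given the human grasp class,
    or None when the robot should wait (e.g. the hand is still moving)."""
    offset = APPROACH_OFFSETS.get(grasp_class)
    if offset is None:
        return None
    return object_position + offset

print(plan_takeover_position(np.array([0.5, 0.0, 0.3]), "pinch_bottom"))
```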


CaTGrasp: Learning Category-Level Task-Relevant Grasping in Clutter from Simulation

arXiv.org Artificial Intelligence

Task-relevant grasping is critical for industrial assembly, where downstream manipulation tasks constrain the set of valid grasps. Learning to perform this task, however, is challenging, since task-relevant grasp labels are hard to define and annotate. There is also no consensus yet on proper representations for modeling task-relevant grasps, nor are there off-the-shelf tools for performing them. This work proposes a framework to learn task-relevant grasping for industrial objects without the need for time-consuming real-world data collection or manual annotation. To achieve this, the entire framework is trained solely in simulation, including supervised training with synthetic label generation and self-supervised hand-object interaction. In the context of this framework, this paper proposes a novel object-centric, category-level canonical representation, which allows dense correspondences to be established across object instances and task-relevant grasps to be transferred to novel instances. Extensive experiments on task-relevant grasping of densely cluttered industrial objects are conducted in both simulation and real-world setups, demonstrating the effectiveness of the proposed framework. Code and data will be released upon acceptance at https://sites.google.com/view/catgrasp.
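One simple way to picture the grasp-transfer step is a nearest-neighbour lookup through the canonical space: contacts annotated on a source instance are expressed in canonical coordinates and matched to the closest canonically-mapped points of the novel instance. The sketch below uses that stand-in; the learned dense correspondence in the paper is more involved, and all names here are illustrative.

```python
import numpy as np

def transfer_grasp_contacts(src_contacts_canonical, tgt_points, tgt_canonical):
    """Transfer task-relevant grasp contacts to a novel instance via a
    category-level canonical space (illustrative nearest-neighbour version).

    src_contacts_canonical : (K, 3) contact points in canonical coordinates,
                             e.g. annotated on a source instance.
    tgt_points             : (N, 3) points of the novel instance (world frame).
    tgt_canonical          : (N, 3) the same points mapped into canonical space
                             by a per-point correspondence model.
    Returns (K, 3) world-frame points on the novel instance matching the contacts.
    """
    transferred = []
    for c in src_contacts_canonical:
        dists = np.linalg.norm(tgt_canonical - c, axis=1)  # distance in canonical space
        transferred.append(tgt_points[np.argmin(dists)])   # closest corresponding point
    return np.stack(transferred)
```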


Long-Horizon Manipulation of Unknown Objects via Task and Motion Planning with Estimated Affordances

arXiv.org Artificial Intelligence

Abstract -- We present a strategy for designing and building very general robot manipulation systems that integrate a general-purpose task-and-motion planner with engineered and learned perception modules that estimate properties and affordances of unknown objects. Such systems are closed-loop policies that map from RGB images, depth images, and robot joint encoder measurements to robot joint position commands. We show that, following this strategy, a task-and-motion planner can be used to plan intelligent behaviors even in the absence of a priori knowledge regarding the set of manipulable objects, their geometries, and their affordances. We explore several different ways of implementing such perceptual modules for segmentation, property detection, shape estimation, and grasp generation, and we show how these modules are integrated within the PDDLStream task and motion planning framework. Our objective is to design and build robot policies that interact robustly and safely with large collections of objects that are only partially observable and have never been seen before, where achieving the goal may require many coordinated actions. The operation of our system, called M0M (Manipulation with Zero Models), is illustrated in Figure 1: the goal is for all perceivable objects to be on a blue target region, and the robot first finds and executes a plan that picks and places the cracker box on the blue target region.
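The closed-loop structure described above can be summarized as a perceive-plan-act loop. The sketch below is schematic only: the perception, planning, and execution components are passed in as opaque callables, standing in for the paper's perception modules and PDDLStream integration rather than reproducing them.

```python
def perceive_plan_act(goal_satisfied, perceive, plan, execute, max_iterations=10):
    """One schematic closed-loop episode: observe, estimate objects and
    affordances, plan, execute, and repeat until the goal holds.

    goal_satisfied(world) -> bool     # does the estimated world satisfy the goal?
    perceive()            -> world    # RGB-D + joint encoders -> segments, shapes, grasps
    plan(world)           -> plan|None  # task-and-motion planning over the estimates
    execute(plan)         -> None     # send joint position commands to the robot
    """
    for _ in range(max_iterations):
        world = perceive()            # re-estimate the scene from scratch each cycle
        if goal_satisfied(world):
            return True
        p = plan(world)
        if p is None:
            continue                  # planning failed: re-observe and try again
        execute(p)
    return False
```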