Active Perception for Grasp Detection via Neural Graspness Field

Neural Information Processing Systems

This paper tackles the challenge of active perception for robotic grasp detection in cluttered environments. Incomplete 3D geometry information can negatively affect the performance of learning-based grasp detection methods, and scanning the scene from multiple views introduces significant time costs. To achieve reliable grasping performance with efficient camera movement, we propose an active grasp detection framework based on the Neural Graspness Field (NGF), which models the scene incrementally and facilitates next-best-view planning.
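The next-best-view planning described above can be sketched as follows. This is a toy illustration, not the paper's actual NGF formulation: the voxel grid, the per-voxel uncertainty values, the candidate visibility masks, and the gain measure are all assumed placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy voxel grid: each voxel carries an uncertainty value accumulated while
# the scene is modeled incrementally (high = poorly observed region).
n_voxels = 500
uncertainty = rng.random(n_voxels)

# Hypothetical candidate camera poses; each random mask stands in for a real
# visibility computation deciding which voxels a view would observe.
candidate_views = [rng.random(n_voxels) > 0.7 for _ in range(8)]

def view_gain(visible_mask, uncertainty):
    """Expected gain of a view: total uncertainty of the voxels it covers."""
    return uncertainty[visible_mask].sum()

# Next-best view = the candidate covering the most unresolved geometry.
gains = [view_gain(mask, uncertainty) for mask in candidate_views]
best = int(np.argmax(gains))
print(f"next-best view: {best}, expected gain: {gains[best]:.2f}")
```

In a real system the uncertainty field would be updated after each move, so the loop alternates between capturing a view and re-scoring the remaining candidates.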



FineGrasp: Towards Robust Grasping for Delicate Objects

Du, Yun, Zhao, Mengao, Lin, Tianwei, Jin, Yiwei, Huang, Chaodong, Su, Zhizhong

arXiv.org Artificial Intelligence

Recent advancements in robotic grasping have led to its integration as a core module in many manipulation systems. For instance, language-driven semantic segmentation enables the grasping of any designated object or object part. However, existing methods often struggle to generate feasible grasp poses for small objects or delicate components, potentially causing the entire pipeline to fail. To address this issue, we propose a novel grasping method, FineGrasp, which introduces improvements in three key aspects. First, we introduce multiple network modifications to enhance the network's ability to handle delicate regions. Second, we address the issue of label imbalance and propose a refined graspness label normalization strategy. Third, we introduce a new simulated grasp dataset and show that mixed sim-to-real training further improves grasp performance. Experimental results show significant improvements, especially in grasping small objects, and confirm the effectiveness of our system in semantic grasping.
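The abstract does not specify the normalization strategy, but one plausible sketch of graspness label normalization with imbalance in mind is percentile-based rescaling per scene, which keeps a few extreme label values from flattening the rest. The function and its parameters are assumptions for illustration, not the paper's method:

```python
import numpy as np

def normalize_graspness(labels, low_pct=5, high_pct=95):
    """Per-scene percentile normalization (an assumed variant, not
    necessarily FineGrasp's exact strategy). Clipping to an inner
    percentile band stops a handful of outlier labels from compressing
    the scores of small or delicate regions toward zero."""
    lo, hi = np.percentile(labels, [low_pct, high_pct])
    if hi - lo < 1e-8:
        return np.zeros_like(labels)
    return np.clip((labels - lo) / (hi - lo), 0.0, 1.0)

# One outlier (5.0) would dominate a naive min-max normalization.
scene_labels = np.array([0.01, 0.02, 0.03, 0.9, 0.95, 5.0])
print(normalize_graspness(scene_labels))
```

With naive min-max scaling, the non-outlier labels above would all map close to zero; percentile clipping preserves their relative spread.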


DexGraspNet 2.0: Learning Generative Dexterous Grasping in Large-scale Synthetic Cluttered Scenes

Zhang, Jialiang, Liu, Haoran, Li, Danshi, Yu, Xinqiang, Geng, Haoran, Ding, Yufei, Chen, Jiayi, Wang, He

arXiv.org Artificial Intelligence

Grasping in cluttered scenes remains highly challenging for dexterous hands due to the scarcity of data. To address this problem, we present a large-scale synthetic benchmark, encompassing 1319 objects, 8270 scenes, and 427 million grasps. Beyond benchmarking, we also propose a novel two-stage grasping method that learns efficiently from data by using a diffusion model that conditions on local geometry. Our proposed generative method outperforms all baselines in simulation experiments. Furthermore, with the aid of test-time-depth restoration, our method demonstrates zero-shot sim-to-real transfer, attaining 90.7% real-world dexterous grasping success rate in cluttered scenes.
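The two-stage structure can be caricatured as follows: pick a seed point in the scene, then generate a grasp pose by iteratively denoising a random sample conditioned on local geometry around the seed. Everything here is a stand-in (random seed scores, a hand-written pull-toward-centroid "denoiser", a 6-D pose vector); the actual method uses a learned diffusion model:

```python
import numpy as np

rng = np.random.default_rng(1)

points = rng.random((1000, 3))        # toy scene point cloud
seed_scores = rng.random(1000)        # placeholder for learned seed scores
seed = points[np.argmax(seed_scores)] # stage 1: select a seed point

def local_geometry(points, center, radius=0.2):
    """Crop the neighborhood that the generator conditions on."""
    return points[np.linalg.norm(points - center, axis=1) < radius]

def denoise_step(pose, cond, t):
    """Placeholder for a learned diffusion denoiser: pulls the noisy pose
    toward the conditioning centroid while noise shrinks with t."""
    target = np.concatenate([cond.mean(axis=0), np.zeros(3)])  # pos + rot
    return pose + (target - pose) * 0.2 + rng.normal(0, 0.01 * t, 6)

cond = local_geometry(points, seed)   # seed itself is always included
pose = rng.normal(size=6)             # stage 2: start from pure noise
for t in np.linspace(1.0, 0.0, 20):
    pose = denoise_step(pose, cond, t)
print("generated grasp pose:", np.round(pose, 3))
```

The point of the sketch is the control flow (discrete seed selection, then conditional iterative refinement), not the denoiser itself.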


Graspness Discovery in Clutters for Fast and Accurate Grasp Detection

Wang, Chenxi, Fang, Hao-Shu, Gou, Minghao, Fang, Hongjie, Gao, Jin, Lu, Cewu

arXiv.org Artificial Intelligence

Efficient and robust grasp pose detection is vital for robotic manipulation. For general 6-DoF grasping, conventional methods treat all points in a scene equally and usually adopt uniform sampling to select grasp candidates. However, we discover that ignoring where to grasp greatly harms the speed and accuracy of current grasp pose detection methods. In this paper, we propose "graspness", a quality based on geometric cues that distinguishes graspable areas in cluttered scenes. A look-ahead search method is proposed for measuring graspness, and statistical results justify the rationality of our method. To quickly detect graspness in practice, we develop a neural network, the cascaded graspness model, to approximate the search process. Extensive experiments verify the stability, generality and effectiveness of our graspness model, allowing it to be used as a plug-and-play module for different methods. A large improvement in accuracy is observed for various previous methods after they are equipped with our graspness model. Moreover, we develop GSNet, an end-to-end network that incorporates our graspness model for early filtering of low-quality predictions. Experiments on a large-scale benchmark, GraspNet-1Billion, show that our method outperforms prior art by a large margin (30+ AP) and achieves high inference speed. The GSNet library has been integrated into AnyGrasp, available at https://github.com/graspnet/anygrasp_sdk.
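The early-filtering idea contrasted with uniform sampling above can be sketched as follows. The scene, the graspness scores, and the threshold and candidate count are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy scene: each point carries a graspness score in [0, 1]. Instead of
# sampling grasp candidates uniformly over all points, filter first so
# candidates come only from graspable regions.
points = rng.random((2048, 3))
graspness = rng.random(2048)  # stand-in for a learned graspness model

def sample_candidates(points, graspness, k=64, threshold=0.6):
    """Keep high-graspness points, then sample candidate seed points."""
    keep = np.flatnonzero(graspness > threshold)
    chosen = rng.choice(keep, size=min(k, keep.size), replace=False)
    return points[chosen]

candidates = sample_candidates(points, graspness)
print(f"{len(candidates)} candidates from high-graspness regions")
```

Downstream grasp pose estimation then runs only on these candidates, which is where the reported speed and accuracy gains come from: compute is not wasted scoring poses at ungraspable points.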


Improving Robotic Grasping Ability Through Deep Shape Generation

Jiang, Junnan, Tu, Yuyang, Xiao, Xiaohui, Fu, Zhongtao, Zhang, Jianwei, Chen, Fei, Li, Miao

arXiv.org Artificial Intelligence

Data-driven approaches have become a dominant paradigm for robotic grasp planning. However, the performance of these approaches is heavily influenced by the quality of the available training data. In this paper, we propose a framework that generates object shapes to improve grasping dataset quality, thus enhancing the grasping ability of a pre-designed learning-based grasp planning network. In this framework, object shapes are embedded into a low-dimensional feature space using an AutoEncoder (encoder-decoder) network. Rarity and graspness scores are defined for each object shape using outlier detection and grasp-quality criteria. Subsequently, new object shapes are generated in feature space by leveraging the features of the original objects with high rarity and graspness scores, and these generated shapes can be employed to augment the grasping dataset. Finally, results from simulation and real-world experiments demonstrate that the grasping ability of the learning-based grasp planning network can be effectively improved with the generated object shapes.
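The embed-score-generate pipeline can be sketched as follows. PCA stands in for the learned AutoEncoder, random scores stand in for the rarity and graspness criteria, and latent-space interpolation is one assumed way to "leverage" the features of high-scoring shapes; none of these choices is taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

shapes = rng.random((100, 32))  # toy flattened shape descriptors
scores = rng.random(100)        # placeholder for rarity * graspness scores

# "Encoder"/"decoder": project onto the dataset's top principal components
# (a linear surrogate for the AutoEncoder's low-dimensional feature space).
mean = shapes.mean(axis=0)
_, _, components = np.linalg.svd(shapes - mean, full_matrices=False)
encode = lambda x: (x - mean) @ components[:8].T
decode = lambda z: z @ components[:8] + mean

# Pick the two best-scoring shapes and blend their latent codes to
# synthesize a new shape for dataset augmentation.
top = np.argsort(scores)[-2:]
z_a, z_b = encode(shapes[top[0]]), encode(shapes[top[1]])
new_shape = decode(0.5 * (z_a + z_b))
print("generated shape descriptor:", new_shape.shape)
```

Because generation happens in the compact feature space rather than on raw geometry, the decoded result stays on (or near) the manifold of plausible shapes, which is the property that makes the augmented dataset useful.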