Geometry Matching for Multi-Embodiment Grasping
Maria Attarian, Muhammad Adil Asif, Jingzhou Liu, Ruthrash Hari, Animesh Garg, Igor Gilitschenski, Jonathan Tompson
arXiv.org Artificial Intelligence
Many existing learning-based grasping approaches concentrate on a single embodiment, offer limited generalization to higher-DoF end-effectors, and cannot capture a diverse set of grasp modes. We tackle the problem of grasping with multiple embodiments by learning rich geometric representations for both objects and end-effectors using Graph Neural Networks. Our novel method, GeoMatch, applies supervised learning on grasping data from multiple embodiments, learning end-to-end contact point likelihood maps as well as conditional autoregressive predictions of grasps keypoint-by-keypoint. We compare our method against baselines that support multiple embodiments. Our approach performs better across three end-effectors while also producing diverse grasps. Examples, including real robot demos, can be found at geo-match.github.io.
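To make the abstract's description of keypoint-by-keypoint autoregressive contact prediction more concrete, below is a minimal, hypothetical sketch of what such a predictor could look like. It is not the paper's implementation: the module name, feature dimensions, and the simple centroid-based conditioning on previously chosen contacts are all assumptions, and the object/end-effector features are stand-ins for the GNN embeddings the paper describes.

```python
import torch
import torch.nn as nn

class AutoregressiveContactPredictor(nn.Module):
    """Hypothetical sketch: predict one contact vertex per end-effector
    keypoint, conditioning each prediction on previously chosen contacts."""

    def __init__(self, obj_dim=64, kp_dim=64, hidden=128):
        super().__init__()
        # Scores each object vertex given its feature, the current keypoint
        # feature, and a summary (centroid) of already-selected contacts.
        self.score = nn.Sequential(
            nn.Linear(obj_dim + kp_dim + 3, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obj_feats, obj_xyz, kp_feats):
        # obj_feats: (V, obj_dim), obj_xyz: (V, 3), kp_feats: (K, kp_dim)
        V = obj_feats.shape[0]
        chosen_xyz, contact_ids, likelihoods = [], [], []
        for k in range(kp_feats.shape[0]):
            # Summarize previously selected contacts (zeros before the first keypoint).
            ctx = (torch.stack(chosen_xyz).mean(0) if chosen_xyz
                   else torch.zeros(3, device=obj_xyz.device))
            inp = torch.cat([obj_feats,
                             kp_feats[k].expand(V, -1),
                             ctx.expand(V, -1)], dim=-1)
            # Likelihood map over object vertices for this keypoint.
            probs = torch.softmax(self.score(inp).squeeze(-1), dim=0)
            idx = probs.argmax()
            contact_ids.append(idx)
            likelihoods.append(probs)
            chosen_xyz.append(obj_xyz[idx])
        return torch.stack(contact_ids), torch.stack(likelihoods)

# Usage with random tensors standing in for GNN-produced embeddings.
obj_feats, obj_xyz = torch.randn(1024, 64), torch.rand(1024, 3)
kp_feats = torch.randn(6, 64)          # e.g. 6 end-effector keypoints
ids, maps = AutoregressiveContactPredictor()(obj_feats, obj_xyz, kp_feats)
print(ids.shape, maps.shape)           # (6,), (6, 1024)
```

The greedy argmax selection here is only for illustration; a learned model of this kind could equally sample from each likelihood map to obtain diverse grasps, which is in the spirit of the diversity the abstract emphasizes.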
Dec-6-2023