SPGrasp: Spatiotemporal Prompt-driven Grasp Synthesis in Dynamic Scenes
Mei, Yunpeng, Cao, Hongjie, Xia, Yinqiu, Xiao, Wei, Feng, Zhaohan, Wang, Gang, Chen, Jie
–arXiv.org Artificial Intelligence
Real-time interactive grasp synthesis for dynamic objects remains challenging, as existing methods fail to achieve low-latency inference while remaining promptable. To bridge this gap, we propose SPGrasp (spatiotemporal prompt-driven dynamic grasp synthesis), a novel framework that extends the Segment Anything Model 2 (SAMv2) to grasp estimation on video streams. Our core innovation integrates user prompts with spatiotemporal context, enabling real-time interaction with end-to-end latency as low as 59 ms while ensuring temporal consistency for dynamic objects. In benchmark evaluations, SPGrasp achieves instance-level grasp accuracies of 90.6% on OCID and 93.8% on Jacquard. On the challenging GraspNet-1Billion dataset under continuous tracking, SPGrasp reaches 92.0% accuracy at 73.1 ms per-frame latency, a 58.5% latency reduction over the prior state-of-the-art promptable method RoG-SAM at comparable accuracy. Real-world experiments on 13 moving objects demonstrate a 94.8% success rate in interactive grasping. These results confirm that SPGrasp effectively resolves the latency-interactivity trade-off in dynamic grasp synthesis.
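The per-frame loop the abstract describes (user prompt plus a rolling spatiotemporal memory feeding a promptable model) can be sketched as below. This is a minimal illustration, not the authors' implementation: `GraspModel`, `track_grasps`, the memory length, and the grasp tuple format are all hypothetical stand-ins.

```python
import time
from collections import deque

class GraspModel:
    """Stub standing in for a SAMv2-style promptable grasp head (hypothetical)."""
    def infer(self, frame, prompt, memory):
        # A real model would fuse the prompt with memory features here;
        # the stub just echoes the prompt as a dummy grasp (x, y, angle, width).
        return (prompt[0], prompt[1], 0.0, 0.05)

def track_grasps(frames, prompt, model, memory_len=7):
    """Run promptable grasp estimation over a video stream, keeping a
    bounded memory of recent frames for temporal consistency."""
    memory = deque(maxlen=memory_len)   # rolling spatiotemporal context
    grasps = []
    for frame in frames:
        start = time.perf_counter()
        grasp = model.infer(frame, prompt, memory)
        latency_ms = (time.perf_counter() - start) * 1e3
        memory.append(frame)            # update context after each frame
        grasps.append((grasp, latency_ms))
    return grasps

results = track_grasps(frames=[object()] * 5, prompt=(120, 80), model=GraspModel())
print(len(results))  # one (grasp, latency) pair per frame
```

Measuring latency inside the loop mirrors the paper's per-frame latency reporting; a real deployment would read frames from a camera stream rather than a list.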
Sep-3-2025