Monocular One-Shot Metric-Depth Alignment for RGB-Based Robot Grasping
Teng Guo, Baichuan Huang, Jingjin Yu
arXiv.org Artificial Intelligence
Accurate 6D object pose estimation is a prerequisite for successfully completing robotic prehensile and non-prehensile manipulation tasks. At present, 6D pose estimation for robotic manipulation generally relies on depth sensors based on, e.g., structured light, time-of-flight, and stereo vision, which can be expensive, produce noisy output (compared with RGB cameras), and fail to handle transparent objects. On the other hand, state-of-the-art monocular depth estimation models (MDEMs) provide only affine-invariant depth, up to an unknown scale and shift. Metric MDEMs achieve some zero-shot success on public datasets but fail to generalize. We propose a novel framework, Monocular One-shot Metric-depth Alignment (MOMA), to recover metric depth from a single RGB image through a one-shot adaptation built on MDEM techniques. MOMA performs scale-rotation-shift alignment during camera calibration, guided by sparse ground-truth depth points, enabling accurate depth estimation without additional data collection or model retraining on the testing setup. MOMA also supports fine-tuning the MDEM on transparent objects, demonstrating strong generalization. Real-world experiments on tabletop two-finger grasping and suction-based bin-picking applications show MOMA achieves high success rates across diverse tasks, confirming its effectiveness.
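The core alignment idea in the abstract, fitting a transform that maps affine-invariant MDEM output onto metric depth using a handful of ground-truth points collected at calibration time, can be sketched in its simplest (scale-and-shift only) form. This is an illustrative assumption, not the paper's full method: MOMA's alignment also involves a rotation component, and the function and variable names here are hypothetical.

```python
import numpy as np

def align_affine_depth(pred, gt_sparse, idx):
    """Map affine-invariant depth predictions to metric depth.

    Least-squares fit of a global scale s and shift t so that
    s * pred[idx] + t approximates the sparse ground-truth depths.
    (Simplified sketch: the paper's alignment also includes rotation,
    which is omitted here.)
    """
    # Design matrix [pred, 1] for the linear model s * d + t
    A = np.stack([pred[idx], np.ones(len(idx))], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, gt_sparse, rcond=None)
    return s * pred + t, (s, t)

# Synthetic example: a known scale/shift distortion is recovered
# exactly from only 10 sparse calibration points.
rng = np.random.default_rng(0)
metric = rng.uniform(0.3, 1.5, size=1000)       # true metric depths (m)
pred = (metric - 0.2) / 2.0                     # affine-distorted prediction
idx = rng.choice(1000, size=10, replace=False)  # sparse ground-truth points
aligned, (s, t) = align_affine_depth(pred, metric[idx], idx)
```

Because the fit is a closed-form least-squares solve over a few points, it needs no retraining of the MDEM, which is what makes a one-shot, per-setup calibration practical.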
Jun-23-2025
- Country:
- Asia > Japan
- Honshū > Chūbu > Ishikawa Prefecture > Kanazawa (0.04)
- Europe > Italy
- North America > United States
- New Jersey > Middlesex County > Piscataway (0.04)
- Genre:
- Research Report (1.00)
- Technology:
- Information Technology > Artificial Intelligence
- Robots > Manipulation (0.50)
- Vision > Image Understanding (0.77)