Vi-TacMan: Articulated Object Manipulation via Vision and Touch
Leiyao Cui, Zihang Zhao, Sirui Xie, Wenhuan Zhang, Zhi Han, Yixin Zhu
– arXiv.org Artificial Intelligence
Autonomous manipulation of articulated objects remains a fundamental challenge for robots in human environments. Vision-based methods can infer hidden kinematics but often yield imprecise estimates on unfamiliar objects. Tactile approaches achieve robust control through contact feedback but require accurate initialization. This suggests a natural synergy: vision for global guidance, touch for local precision. Yet no framework systematically exploits this complementarity for generalized articulated manipulation. Here we present Vi-TacMan, which uses vision to propose grasps and coarse directions that seed a tactile controller for precise execution. By incorporating surface normals as geometric priors and modeling directions via von Mises-Fisher distributions, our approach achieves significant gains over baselines (all p<0.0001). Critically, manipulation succeeds without explicit kinematic models: the tactile controller refines coarse visual estimates through real-time contact regulation. Tests on more than 50,000 simulated and diverse real-world objects confirm robust cross-category generalization. This work establishes that coarse visual cues suffice for reliable manipulation when coupled with tactile feedback, offering a scalable paradigm for autonomous systems in unstructured environments.
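The abstract does not give implementation details, so the following is only a minimal illustrative sketch of the idea it describes: modeling a coarse manipulation direction with a von Mises-Fisher (vMF) distribution on the unit sphere and biasing it toward a surface-normal prior. All function names and parameter values here are hypothetical, not the authors' code.

```python
import numpy as np

def vmf_log_pdf(x, mu, kappa):
    """Log-density of a von Mises-Fisher distribution on the unit sphere S^2.

    x, mu: unit 3-vectors; kappa: concentration (> 0).
    For p = 3 the normalizer is kappa / (4 * pi * sinh(kappa)).
    """
    # log(sinh(kappa)) written stably as kappa + log(1 - exp(-2*kappa)) - log(2)
    log_norm = (np.log(kappa) - np.log(4.0 * np.pi)
                - (kappa + np.log1p(-np.exp(-2.0 * kappa)) - np.log(2.0)))
    return log_norm + kappa * np.dot(mu, x)

def fuse_direction(visual_dir, surface_normal, kappa_vis=20.0, kappa_prior=5.0,
                   n_samples=2048, seed=0):
    """Refine a coarse direction by scoring random unit vectors under the product
    of two vMF factors: one centered on the visual estimate, one on the surface
    normal acting as a geometric prior (illustrative only)."""
    rng = np.random.default_rng(seed)
    candidates = rng.normal(size=(n_samples, 3))
    candidates /= np.linalg.norm(candidates, axis=1, keepdims=True)
    scores = np.array([
        vmf_log_pdf(c, visual_dir, kappa_vis) + vmf_log_pdf(c, surface_normal, kappa_prior)
        for c in candidates
    ])
    return candidates[np.argmax(scores)]

# Example: a coarse visual estimate slightly off the surface normal.
visual_dir = np.array([0.0, 0.2, 0.98]); visual_dir /= np.linalg.norm(visual_dir)
surface_normal = np.array([0.0, 0.0, 1.0])
print(fuse_direction(visual_dir, surface_normal))
```

Because both factors share the same support, the product of the two vMF densities is itself vMF with mean direction proportional to kappa_vis * visual_dir + kappa_prior * surface_normal, so the sampling step above could be replaced by normalizing that weighted sum; the sampling form is kept only to show the log-density in use.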
Oct-9-2025
- Genre:
- Research Report
- Experimental Study (0.34)
- New Finding (0.34)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning (1.00)
- Representation & Reasoning (0.88)
- Robots > Manipulation (0.68)