Viewpoint-Agnostic Manipulation Policies with Strategic Vantage Selection

Sreevishakh Vasudevan, Som Sagar, Ransalu Senanayake

arXiv.org Artificial Intelligence 

Abstract-- Since vision-based manipulation policies are typically trained on data gathered from a single viewpoint, their performance drops when the view changes during deployment. Naively aggregating demonstrations from numerous random views is not only costly but also known to destabilize learning, as excessive visual diversity acts as noise. We present Vantage, a viewpoint selection framework that fine-tunes any pre-trained policy on a small, strategically chosen set of camera poses to induce viewpoint-agnostic behavior. Instead of relying on costly brute-force search over viewpoints, Vantage formulates camera placement as an information-gain optimization problem in a continuous space. This approach balances exploration of novel poses with exploitation of promising ones, while also providing theoretical guarantees on convergence and robustness. Across manipulation tasks and policy families, Vantage consistently improves success under viewpoint shifts compared to fixed, grid, or random data-selection strategies, with only a handful of fine-tuning steps. Experiments conducted on simulated and real-world setups show that Vantage increases the task success rate by 25% for diffusion policies and yields robust gains in dynamic-camera settings.

I. INTRODUCTION

Modern robot manipulation policies trained with visual inputs have achieved levels of precision and adaptability that were once considered far-fetched.
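The abstract frames camera placement as an information-gain optimization in a continuous pose space that trades off exploration of novel poses against exploitation of promising ones. The paper's exact algorithm is not given in this excerpt; one common way to instantiate such a formulation is Bayesian optimization with a Gaussian-process surrogate and an upper-confidence-bound (UCB) acquisition. The sketch below is illustrative only: the function names, the 2-D azimuth/elevation pose parameterization, and the toy score function standing in for "fine-tune and evaluate the policy from this pose" are all assumptions, not details from the paper.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.5):
    # Squared-exponential kernel between pose sets A (n, d) and B (m, d).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(X, y, Xq, noise=1e-4):
    # GP posterior mean and variance at query poses Xq, given observations (X, y).
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xq)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(np.diag(rbf_kernel(Xq, Xq)) - (v**2).sum(0), 1e-12, None)
    return mu, var

def select_camera_poses(score_fn, bounds, n_rounds=8, n_candidates=256,
                        beta=2.0, seed=0):
    """UCB loop: each round queries the pose maximizing mean + beta * std."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    X = rng.uniform(lo, hi, size=(2, len(bounds)))  # small random seed set
    y = np.array([score_fn(x) for x in X])
    for _ in range(n_rounds):
        cand = rng.uniform(lo, hi, size=(n_candidates, len(bounds)))
        mu, var = gp_posterior(X, y, cand)
        ucb = mu + beta * np.sqrt(var)  # exploration vs. exploitation trade-off
        x_next = cand[np.argmax(ucb)]
        X = np.vstack([X, x_next])
        y = np.append(y, score_fn(x_next))
    return X, y

# Hypothetical stand-in for the expensive "fine-tune policy, measure success
# from pose x" evaluation; here a smooth bump peaked at azimuth 1.0, elev 0.5.
toy_gain = lambda x: float(np.exp(-np.sum((x - np.array([1.0, 0.5])) ** 2)))
X, y = select_camera_poses(toy_gain, bounds=[(0.0, 2.0), (0.0, 1.0)])
best_pose = X[np.argmax(y)]
```

The key design point mirrored here is that each candidate camera pose is scored without evaluating the true (expensive) objective: the GP's predictive variance rewards unexplored regions of pose space while its mean rewards regions that have already scored well, so only a handful of fine-tuning evaluations are spent.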