Much Ado About Noising: Dispelling the Myths of Generative Robotic Control
Chaoyi Pan, Giri Anantharaman, Nai-Chieh Huang, Claire Jin, Daniel Pfrommer, Chenyang Yuan, Frank Permenter, Guannan Qu, Nicholas Boffi, Guanya Shi, Max Simchowitz
arXiv.org Artificial Intelligence
Long-horizon, dexterous manipulation tasks such as furniture assembly, food preparation, and manufacturing have been a holy grail in robotics. Recent large robot action models (Team et al., 2025; Black et al., 2024; Kim et al., 2024) have made substantial breakthroughs towards these goals by imitating expert demonstrations of varying quality. We provide a more comprehensive review of related work in Section 6, but highlight here a key trend: while supervised learning from demonstration, also known as behavior cloning (BC), has been applied across domains for decades (Pomerleau, 1988), its recent success in robotic manipulation has coincided with the adoption of what we term generative control policies (GCPs): robotic control policies that use generative modeling architectures, such as diffusion models, flow models, and autoregressive transformers, as parameterizations of the mapping from observation to action. Given the seemingly transformative nature of GCPs for robot learning, there has been much speculation about the origin of their superior performance relative to policies trained with a regression loss, henceforth regression control policies (RCPs). GCPs, by modeling conditional distributions over actions, are uniquely suited to the multi-task pretraining paradigm popular in today's large robotic models.
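The RCP/GCP distinction above is, at bottom, a difference in training objective. A minimal sketch of the two objectives is below, using NumPy and hypothetical helper names (`rcp_loss`, `gcp_denoising_loss`, and the toy `policy`/`eps_model` callables are illustrative assumptions, not the paper's implementation); the GCP loss shown is a DDPM-style denoising objective, one common instance of the diffusion models the abstract mentions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rcp_loss(policy, obs, action):
    """Regression control policy: mean-squared error between the
    predicted action and the demonstrated action (hypothetical helper)."""
    pred = policy(obs)
    return float(np.mean((pred - action) ** 2))

def gcp_denoising_loss(eps_model, obs, action, t, rng):
    """Generative control policy, diffusion-style sketch: corrupt the
    demonstrated action with Gaussian noise at level t in (0, 1), then
    score how well the model recovers that noise given the observation."""
    eps = rng.standard_normal(action.shape)
    noisy_action = np.sqrt(1.0 - t) * action + np.sqrt(t) * eps
    eps_hat = eps_model(noisy_action, obs, t)
    return float(np.mean((eps_hat - eps) ** 2))

# Toy stand-ins just to exercise both objectives.
obs = rng.standard_normal(4)
action = rng.standard_normal(2)
policy = lambda o: np.zeros(2)               # constant-action "regressor"
eps_model = lambda a, o, t: np.zeros_like(a) # noise predictor stub
```

The point of the contrast: minimizing `rcp_loss` drives the policy toward the conditional *mean* action, which averages over multimodal demonstrations, whereas minimizing the denoising loss fits the full conditional *distribution* over actions, from which the policy later samples.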
Dec-9-2025