Neural Robot Dynamics
Jie Xu, Eric Heiden, Iretiayo Akinola, Dieter Fox, Miles Macklin, Yashraj Narang
arXiv.org Artificial Intelligence
Simulation plays a crucial role in many robotics applications, such as policy learning [1, 2, 3, 4, 5, 6, 7], safe and scalable evaluation of robotic control [8, 9, 10, 11], and computational optimization of robot designs [12, 13, 14]. Recently, neural robotics simulators have emerged as a promising alternative to traditional analytical simulators, as they can efficiently predict robot dynamics and learn intricate physics from real-world data. For instance, neural simulators have been leveraged to capture complex interactions that are challenging to model analytically [15, 16, 17, 18], or have served as learned world models that enable sample-efficient policy learning [19, 20]. However, existing neural robotics simulators typically require application-specific training, often assuming fixed environments [20, 21] or training simultaneously alongside control policies [22, 23]. These limitations stem primarily from end-to-end frameworks with inadequate representations of the global simulation state: the neural model typically substitutes for the entire classical simulator, directly mapping the robot state and control actions (e.g., target joint positions, target link orientations) to the robot's next state. Because the environment is not encoded in the state representation, the learned simulator must implicitly memorize task and environment details. Moreover, using controller actions as input causes the simulator to overfit to the particular low-level controllers used during training. Consequently, unlike classical simulators, these neural simulators often fail to generalize to novel state distributions (induced by new tasks), unseen environment setups, and customized controllers (e.g., novel control laws or controller gains).
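The end-to-end formulation critiqued above can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's model): a single network maps (robot state, controller action) directly to the next robot state, with no input encoding the environment or the low-level controller; the dimensions, layer sizes, and randomly initialized weights are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 14   # illustrative: 7 joint positions + 7 joint velocities
ACTION_DIM = 7   # illustrative: target joint positions for a position controller
HIDDEN = 64

# Randomly initialized two-layer MLP weights stand in for a trained model.
W1 = rng.standard_normal((STATE_DIM + ACTION_DIM, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, STATE_DIM)) * 0.1
b2 = np.zeros(STATE_DIM)

def predict_next_state(state, action):
    """End-to-end step: next_state = f(state, action).

    Nothing in the input represents contact geometry, object poses, or the
    controller's gains -- the network must memorize them implicitly, which
    is the source of the generalization failures described in the text.
    """
    x = np.concatenate([state, action])
    h = np.tanh(x @ W1 + b1)
    return state + h @ W2 + b2  # residual prediction of the next state

state = np.zeros(STATE_DIM)
action = np.zeros(ACTION_DIM)
next_state = predict_next_state(state, action)
print(next_state.shape)  # (14,)
```

Because the environment and controller are baked into the learned weights rather than exposed as inputs, changing either at test time invalidates the mapping, which is exactly the limitation the passage identifies.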
Aug-22-2025