QuadKAN: KAN-Enhanced Quadruped Motion Control via End-to-End Reinforcement Learning
arXiv.org Artificial Intelligence
Legged robots offer mobility where wheeled platforms fail, such as stairs, rubble, soft substrates, and cluttered indoor-outdoor settings, enabling applications in inspection, search and rescue, agriculture, and planetary exploration [1]. Robust locomotion control is therefore a foundational capability for practical quadrupedal systems, underpinning safe navigation and dependable operation across diverse terrains and disturbances [2]. Deep reinforcement learning (DRL) has emerged as a compelling paradigm for such control because it optimizes closed-loop policies through interaction and can produce adaptive behaviors [3]. A substantial body of prior work has focused on training blind controllers that rely exclusively on proprioceptive inputs such as inertial measurement units (IMUs) and joint feedback [4]. While these blind policies can traverse uneven and unknown terrains through large-scale simulation and domain randomization, they inherently lack foresight: without exteroceptive input, they respond only upon contact and struggle to proactively avoid obstacles or plan foot placement on irregular ground.

Vision complements proprioception by providing anticipatory geometric information, enabling early detection of distant obstacles and terrain changes [5]. As a result, cross-modal policies that integrate proprioception with depth imaging have gained prominence, facilitating safer and more efficient locomotion through earlier trajectory adjustments. Most existing cross-modal pipelines adopt multilayer perceptrons (MLPs) for the proprioceptive encoder and for the decision head that fuses proprioception with vision.
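To make the conventional cross-modal pipeline concrete, the following is a minimal NumPy sketch of the architecture the last sentence describes: an MLP encoder for proprioception, concatenation with a depth-image embedding, and an MLP decision head. All dimensions (48-D proprioceptive state, 64-D depth feature, 12 joint targets) and the helper names (`mlp`, `policy`) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def mlp(x, weights, biases):
    # Stack of affine layers with tanh activations; linear final layer.
    for i, (W, b) in enumerate(zip(weights, biases)):
        x = x @ W + b
        if i < len(weights) - 1:
            x = np.tanh(x)
    return x

rng = np.random.default_rng(0)

def init(sizes):
    # Small random weights and zero biases for each layer.
    Ws = [rng.normal(0.0, 0.1, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
    bs = [np.zeros(b) for b in sizes[1:]]
    return Ws, bs

# Hypothetical dimensions: 48-D proprioception (IMU + joint feedback),
# 64-D depth-image embedding, 12 joint position targets for a quadruped.
proprio_enc = init([48, 128, 32])
head = init([32 + 64, 128, 12])

def policy(proprio, depth_feat):
    z = mlp(proprio, *proprio_enc)           # proprioceptive latent
    fused = np.concatenate([z, depth_feat])  # cross-modal fusion by concatenation
    return mlp(fused, *head)                 # joint targets from the decision head

action = policy(rng.normal(size=48), rng.normal(size=64))
print(action.shape)  # (12,)
```

In practice the depth embedding would come from a convolutional encoder and the weights would be trained end-to-end with a DRL algorithm such as PPO; the sketch only shows the data flow through the two MLP components that the text says dominate existing pipelines.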
Sep-9-2025