DiSPo: Diffusion-SSM based Policy Learning for Coarse-to-Fine Action Discretization

Nayoung Oh, Moonkyeong Jung, Daehyung Park

arXiv.org Artificial Intelligence 

Abstract-- We aim to solve the problem of learning coarse-to-fine skills from demonstrations (LfD). To scale precision, traditional LfD approaches often rely on extensive fine-grained demonstrations with external interpolations, or on dynamics models with limited generalization capabilities. For memory-efficient learning and convenient granularity change, we propose a novel diffusion-SSM based policy (DiSPo) that learns from diverse coarse skills and produces actions at varying control scales by leveraging a state-space model, Mamba. Our evaluations show that the adoption of Mamba and the proposed step-scaling method enables DiSPo to outperform baselines in five coarse-to-fine benchmark tests, while DiSPo also shows decent performance in typical fine-grained motion learning and reproduction. We finally demonstrate the scalability of actions in simulation and real-world manipulation tasks.

In typical object manipulation, small imprecision around local regions often leads to the failure of entire tasks, such as robot welding, screwing, and drawing, as shown in Figure 1.

Figure 1: A capture of a square-drawing task that requires ...
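The abstract credits the state-space model for convenient granularity change. One intuition behind that claim (a minimal sketch of SSM discretization, not the paper's implementation) is that a continuous-time SSM can be re-discretized at any step size, so the same dynamics can be rolled out coarsely or finely over the same horizon. The scalar example below uses exact zero-order-hold discretization; all variable names here are illustrative assumptions.

```python
import math

def discretize(a: float, b: float, dt: float):
    """Zero-order-hold discretization of the scalar SSM x' = a*x + b*u.

    Returns (a_d, b_d) such that x_{k+1} = a_d * x_k + b_d * u_k,
    exact for piecewise-constant input and a != 0.
    """
    a_d = math.exp(a * dt)
    b_d = (a_d - 1.0) / a * b
    return a_d, b_d

def rollout(a: float, b: float, u: float, horizon: float, n_steps: int) -> float:
    """Roll the discretized SSM over `horizon` seconds split into n_steps."""
    dt = horizon / n_steps
    a_d, b_d = discretize(a, b, dt)
    x = 0.0
    for _ in range(n_steps):
        x = a_d * x + b_d * u
    return x

# The same dynamics sampled coarsely (4 steps) or finely (64 steps)
# reach the same state at the end of the horizon, so one learned
# model can serve several control scales.
coarse = rollout(a=-1.0, b=1.0, u=1.0, horizon=2.0, n_steps=4)
fine = rollout(a=-1.0, b=1.0, u=1.0, horizon=2.0, n_steps=64)
```

Because ZOH discretization is exact for constant inputs, `coarse` and `fine` agree to floating-point precision; only the temporal resolution of the emitted steps differs.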
