Quantization-Free Autoregressive Action Transformer
Ziyad Sheebaelhamd, Michael Tschannen, Michael Muehlebach, Claire Vernade
– arXiv.org Artificial Intelligence
Current transformer-based imitation learning approaches introduce discrete action representations and train an autoregressive transformer decoder on the resulting latent code. However, the initial quantization breaks the continuous structure of the action space, thereby limiting the capabilities of the generative model. We propose a quantization-free method instead that leverages Generative Infinite-Vocabulary Transformers (GIVT) as a direct, continuous policy parametrization for autoregressive transformers.

Existing autoregressive policies sidestep the challenge of learning in a continuous domain by discretizing the actions (Lee et al., 2024; Shafiullah et al., 2022). This discretization can introduce several drawbacks: it discards the inherent structure of the continuous space, increases complexity by adding a separate quantization step, and may limit expressiveness or accuracy when fine-grained control is required.
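To make the contrast concrete, the sketch below compares a discretized action head (softmax over a fixed vocabulary of action bins) with a GIVT-style quantization-free head that parametrizes a Gaussian mixture directly over continuous actions. This is a minimal illustration under assumptions, not the authors' implementation; all names (QuantizedActionHead, ContinuousActionHead, action_dim, num_mixtures) are hypothetical.

```python
# Minimal sketch (assumed, not from the paper): discretized vs. quantization-free action heads.
import torch
import torch.nn as nn
from torch.distributions import Categorical, Normal, MixtureSameFamily, Independent


class QuantizedActionHead(nn.Module):
    """Discretized baseline: a softmax over a fixed vocabulary of action bins."""

    def __init__(self, hidden_dim: int, num_bins: int):
        super().__init__()
        self.logits = nn.Linear(hidden_dim, num_bins)

    def forward(self, h: torch.Tensor) -> Categorical:
        return Categorical(logits=self.logits(h))


class ContinuousActionHead(nn.Module):
    """GIVT-style head: predicts a mixture of Gaussians over continuous actions."""

    def __init__(self, hidden_dim: int, action_dim: int, num_mixtures: int = 8):
        super().__init__()
        self.action_dim = action_dim
        self.num_mixtures = num_mixtures
        # Per mixture component: one mixing logit plus mean and log-std for each action dim.
        self.proj = nn.Linear(hidden_dim, num_mixtures * (1 + 2 * action_dim))

    def forward(self, h: torch.Tensor) -> MixtureSameFamily:
        params = self.proj(h)
        logits, means, log_stds = params.split(
            [self.num_mixtures,
             self.num_mixtures * self.action_dim,
             self.num_mixtures * self.action_dim],
            dim=-1,
        )
        means = means.view(*h.shape[:-1], self.num_mixtures, self.action_dim)
        stds = log_stds.view_as(means).exp().clamp(min=1e-4)
        components = Independent(Normal(means, stds), 1)  # factorized over action dims
        return MixtureSameFamily(Categorical(logits=logits), components)


if __name__ == "__main__":
    h = torch.randn(4, 256)                 # decoder features for 4 timesteps (illustrative)
    head = ContinuousActionHead(hidden_dim=256, action_dim=7)
    dist = head(h)
    actions = dist.sample()                 # continuous 7-D actions, no discretization step
    nll = -dist.log_prob(actions).mean()    # imitation loss: negative log-likelihood
    print(actions.shape, nll.item())
```

In this sketch the continuous head keeps the metric structure of the action space and is trained by maximizing the likelihood of demonstrated actions, whereas the quantized head must first map actions to bins and then back, which is where the expressiveness and accuracy losses described above arise.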
Mar-18-2025