Highly Compressed Tokenizer Can Generate Without Training
Lao Beyer, L., Li, T., Chen, X., Karaman, S., He, K.
arXiv.org Artificial Intelligence
Commonly used image tokenizers produce a 2D grid of spatially arranged tokens. In contrast, so-called 1D image tokenizers represent images as highly compressed one-dimensional sequences of as few as 32 discrete tokens. We find that the high degree of compression achieved by a 1D tokenizer with vector quantization enables image editing and generative capabilities through heuristic manipulation of tokens. Even very crude manipulations, such as copying and replacing tokens between the latent representations of two images, enable fine-grained image editing by transferring appearance and semantic attributes. Motivated by the expressivity of the 1D tokenizer's latent space, we construct an image generation pipeline that leverages gradient-based test-time optimization of tokens with plug-and-play loss functions such as reconstruction error or CLIP similarity. We demonstrate our approach on inpainting and text-guided image editing, and show that it can generate diverse and realistic samples without training any generative model.
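The generation pipeline described above amounts to gradient descent on the tokens themselves against a frozen decoder. The sketch below illustrates the idea for the inpainting case under strong simplifying assumptions: the frozen decoder is replaced by a random linear map (a real 1D tokenizer decoder is a neural network, and the tokens would be quantized codebook entries rather than free continuous vectors), and the plug-and-play loss is a masked reconstruction error. All names and shapes here are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen 1D tokenizer decoder: maps 32 token embeddings
# (each d-dimensional) to a flattened image. A linear map keeps the
# sketch self-contained; the paper uses a learned neural decoder.
n_tokens, d, pixels = 32, 16, 256
W = rng.normal(size=(n_tokens * d, pixels)) / np.sqrt(n_tokens * d)

def decode(tokens):
    return tokens.reshape(-1) @ W

# Target image and an inpainting mask (1 = known pixel, 0 = hole).
target = rng.normal(size=pixels)
mask = (rng.random(pixels) > 0.25).astype(float)

# Test-time optimization: gradient descent on the tokens themselves,
# minimizing a masked reconstruction loss (one "plug-and-play" choice;
# CLIP similarity would slot in the same way).
tokens = rng.normal(size=(n_tokens, d)) * 0.01
lr = 0.3
for _ in range(1000):
    resid = mask * (decode(tokens) - target)
    # Analytic gradient of 0.5 * ||mask * (decode(tokens) - target)||^2
    grad = (W @ resid).reshape(n_tokens, d)
    tokens -= lr * grad

loss = 0.5 * np.sum((mask * (decode(tokens) - target)) ** 2)
```

After optimization the decoded image matches the target on the known pixels while the masked region is filled in by the decoder's prior; with a nonlinear decoder the gradient would come from automatic differentiation instead of the closed form used here.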
Jun-11-2025