$\epsilon$-VAE: Denoising as Visual Decoding

Long Zhao, Sanghyun Woo, Ziyu Wan, Yandong Li, Han Zhang, Boqing Gong, Hartwig Adam, Xuhui Jia, Ting Liu

arXiv.org Artificial Intelligence 

In generative modeling, tokenization simplifies complex data into compact, structured representations, creating a more efficient, learnable space. For high-dimensional visual data, it reduces redundancy and emphasizes key features for high-quality generation. Current visual tokenization methods rely on a traditional autoencoder framework, where the encoder compresses data into latent representations and the decoder reconstructs the original input. In this work, we offer a new perspective by proposing denoising as decoding, shifting from single-step reconstruction to iterative refinement. Specifically, we replace the decoder with a diffusion process that iteratively refines noise to recover the original image, guided by the latents provided by the encoder. We evaluate our approach by assessing both reconstruction (rFID) and generation quality (FID), comparing it to state-of-the-art autoencoding approaches. We hope this work offers new insights into integrating iterative generation and autoencoding for improved compression and generation.

Generative modeling aims to capture the underlying distribution of training data, enabling realistic sample generation during inference. A key preprocessing step is tokenization, which converts raw data into discrete tokens or continuous latent representations. These compact representations allow models to efficiently learn complex patterns, enhancing the quality of generated outputs.
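To make the denoising-as-decoding idea concrete, below is a minimal PyTorch sketch of an autoencoder whose decoder is an iterative denoiser conditioned on the encoder's latents. Everything here is an illustrative assumption rather than the paper's architecture: the module names (Encoder, DenoisingDecoder, decode), the layer shapes, the step count, and especially the crude fixed-size refinement rule, which stands in for a proper diffusion sampler (e.g., DDPM/DDIM with a noise schedule and timestep embeddings).

```python
# Minimal sketch of "denoising as decoding": all shapes, step counts, and the
# refinement rule are illustrative assumptions, not the paper's actual method.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses an image into a compact latent (stand-in for the paper's encoder)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, latent_dim, 4, stride=2, padding=1),  # 32x32 -> 8x8 latent
        )

    def forward(self, x):
        return self.net(x)

class DenoisingDecoder(nn.Module):
    """Predicts the noise in x_t, conditioned on the encoder latent z."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.cond = nn.ConvTranspose2d(latent_dim, 32, 4, stride=4)  # upsample z back to image size
        self.net = nn.Sequential(
            nn.Conv2d(3 + 32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, x_t, z, t):
        # t is ignored here for brevity; a real model would embed the timestep.
        h = torch.cat([x_t, self.cond(z)], dim=1)
        return self.net(h)

@torch.no_grad()
def decode(encoder, decoder, x, steps=10):
    """Iteratively refines pure noise into a reconstruction, guided by z = E(x)."""
    z = encoder(x)
    x_t = torch.randn_like(x)        # start from noise instead of decoding in one step
    for i in reversed(range(steps)):
        eps = decoder(x_t, z, i)     # predicted noise at this step
        x_t = x_t - eps / steps      # crude fixed-size update; a real sampler follows a schedule
    return x_t

x = torch.randn(2, 3, 32, 32)        # toy batch of 32x32 "images"
recon = decode(Encoder(), DenoisingDecoder(), x)
print(recon.shape)                   # torch.Size([2, 3, 32, 32])
```

The point of the sketch is the control flow: unlike a standard autoencoder, which maps z to an image in a single forward pass, the latent here is injected at every refinement step, so reconstruction quality depends on how much information z carries into the iterative denoising process.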