Diffusion Transformers with Representation Autoencoders
Boyang Zheng, Nanye Ma, Shengbang Tong, Saining Xie
–arXiv.org Artificial Intelligence
Latent generative modeling, where a pretrained autoencoder maps pixels into a latent space for the diffusion process, has become the standard strategy for Diffusion Transformers (DiT); however, the autoencoder component has barely evolved. Most DiTs continue to rely on the original VAE encoder, which introduces several limitations: outdated backbones that compromise architectural simplicity, low-dimensional latent spaces that restrict information capacity, and weak representations that result from purely reconstruction-based training and ultimately limit generative quality. In this work, we explore replacing the VAE with pretrained representation encoders (e.g., DINO, SigLIP, MAE) paired with trained decoders, forming what we term Representation Autoencoders (RAEs). These models provide both high-quality reconstructions and semantically rich latent spaces, while allowing for a scalable transformer-based architecture. Since these latent spaces are typically high-dimensional, a key challenge is enabling diffusion transformers to operate effectively within them. We analyze the sources of this difficulty, propose theoretically motivated solutions, and validate them empirically. Our approach achieves faster convergence without auxiliary representation alignment losses. Using a DiT variant equipped with a lightweight, wide DDT head, we achieve strong image generation results on ImageNet: 1.51 FID at 256×256 (no guidance) and 1.13 at both 256×256 and 512×512 (with guidance). RAE offers clear advantages and should be the new default for diffusion transformer training. Project page: rae-dit.github.io
Figure 1: Representation Autoencoder (RAE) uses frozen pretrained representations as the encoder with a lightweight decoder to reconstruct input images without compression. RAE enables faster convergence and higher-quality samples in latent diffusion training compared to VAE-based models.
The evolution of generative modeling has been driven by a continual redefinition of where and how models learn to represent data. Early pixel-space models sought to directly capture image statistics, but the emergence of latent diffusion (Vahdat et al., 2021; Rombach et al., 2022) reframed generation as a process operating within a learned, compact representation space. By diffusing in this space rather than in raw pixels, models such as Latent Diffusion Models (LDM) (Rombach et al., 2022) and Diffusion Transformers (DiT) (Peebles & Xie, 2023; Ma et al., 2024) achieve higher visual fidelity and efficiency, powering the most capable image and video generators of today. Despite progress in diffusion backbones, the autoencoder defining the latent space remains largely unchanged. The widely used SD-VAE (Rombach et al., 2022) still relies on heavy channel-wise compression. In addition, SD-VAE, built on a legacy convolutional design, remains computationally inefficient (see Figure 1).
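To make the contrast concrete, here is a toy NumPy sketch (not the authors' implementation) of the two latent designs. Random linear projections stand in for a frozen representation encoder (DINO/SigLIP/MAE) and for a compressed VAE-style bottleneck; all dimensions are illustrative, scaled far down from the paper's ImageNet setting. Fitting a linear decoder in closed form shows that the uncompressed, high-dimensional RAE-style latent retains enough information for near-perfect reconstruction, while a heavily compressed latent does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, illustrative dimensions (assumptions, not the paper's actual sizes):
IMG = 32 * 32 * 3            # flattened input "image"
RAE_DIM = 16 * 192           # high-dimensional latent: same size as the input (no compression)
VAE_DIM = 64                 # heavily compressed VAE-style latent

# Frozen "representation encoder": a fixed random projection standing in for
# a pretrained ViT encoder. Its weights are never trained here.
W_rae = rng.standard_normal((IMG, RAE_DIM)) / np.sqrt(IMG)
# Compressed VAE-style bottleneck, likewise a fixed random projection.
W_vae = rng.standard_normal((IMG, VAE_DIM)) / np.sqrt(IMG)

# A small batch of synthetic "images" (isotropic Gaussian pixels).
X = rng.standard_normal((256, IMG))

# "Train" a lightweight linear decoder for each latent in closed form
# (least squares is a stand-in for the decoder's reconstruction training).
Z_rae = X @ W_rae
D_rae, *_ = np.linalg.lstsq(Z_rae, X, rcond=None)
err_rae = np.mean((Z_rae @ D_rae - X) ** 2)

Z_vae = X @ W_vae
D_vae, *_ = np.linalg.lstsq(Z_vae, X, rcond=None)
err_vae = np.mean((Z_vae @ D_vae - X) ** 2)

# The uncompressed latent reconstructs the batch almost exactly;
# the 64-dim bottleneck discards most of the signal.
print(f"RAE-style reconstruction MSE: {err_rae:.2e}")
print(f"VAE-style reconstruction MSE: {err_vae:.2e}")
```

This is only a linear caricature: real VAEs recover fine detail through learned nonlinear decoders, and the paper's argument is about information capacity and representation quality, not linear invertibility. Still, it illustrates why an uncompressed, semantically pretrained latent gives the decoder an easier reconstruction problem.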
Oct-14-2025