Scalable Diffusion Models with Transformers

William Peebles, Saining Xie

arXiv.org Artificial Intelligence 

We explore a new class of diffusion models based on the transformer architecture. We train latent diffusion models of images, replacing the commonly-used U-Net backbone with a transformer that operates on latent patches. We analyze the scalability of our Diffusion Transformers (DiTs) through the lens of forward pass complexity as measured by Gflops. We find that DiTs with higher Gflops--through increased transformer depth/width or increased number of input tokens--consistently have lower FID.

Machine learning is experiencing a renaissance powered by transformers. Over the past five years, neural architectures for natural language processing [8, 42], vision [10], and several other domains have largely been subsumed by transformers [60]. Many classes of image-level generative models remain holdouts to the trend, though--while transformers see widespread use in autoregressive models [3, 6, 43, 47], they have seen less adoption in other generative modeling frameworks.
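To make the core idea concrete, the sketch below (not the authors' code) shows a transformer backbone that operates on latent patches: a VAE latent is "patchified" into non-overlapping p x p tokens, processed by a plain transformer encoder, and projected back to latent resolution. All hyperparameters (latent 32x32x4, patch size 2, width 384, depth 12) are illustrative assumptions, and the timestep/class conditioning used in the actual DiT architecture is omitted. It also illustrates the Gflops lever the abstract mentions: smaller patches mean more input tokens.

```python
import torch
import torch.nn as nn

class LatentPatchTransformer(nn.Module):
    """Toy DiT-style backbone: tokens are non-overlapping patches of a VAE latent."""
    def __init__(self, latent_channels=4, latent_size=32, patch_size=2,
                 width=384, depth=12, heads=6):
        super().__init__()
        self.p, self.c = patch_size, latent_channels
        # Halving the patch size quadruples the token count, a primary lever on Gflops.
        num_tokens = (latent_size // patch_size) ** 2
        # "Patchify": a strided conv maps each p x p latent patch to one token embedding.
        self.patchify = nn.Conv2d(latent_channels, width, patch_size, stride=patch_size)
        self.pos_emb = nn.Parameter(torch.zeros(1, num_tokens, width))
        layer = nn.TransformerEncoderLayer(d_model=width, nhead=heads,
                                           dim_feedforward=4 * width,
                                           batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)
        # Linear "unpatchify" head predicts the per-patch denoising target.
        self.head = nn.Linear(width, patch_size * patch_size * latent_channels)

    def forward(self, z):                      # z: (B, C, H, W) noisy latent
        x = self.patchify(z)                   # (B, width, H/p, W/p)
        B, _, h, w = x.shape
        x = x.flatten(2).transpose(1, 2)       # (B, N, width), N = h * w tokens
        x = self.blocks(x + self.pos_emb)      # transformer over latent patches
        x = self.head(x)                       # (B, N, p*p*C)
        x = x.reshape(B, h, w, self.p, self.p, self.c)
        # Fold patches back into a latent-resolution feature map.
        return x.permute(0, 5, 1, 3, 2, 4).reshape(B, self.c, h * self.p, w * self.p)

# Example: a 32x32x4 latent with patch size 2 becomes 256 tokens.
out = LatentPatchTransformer()(torch.randn(1, 4, 32, 32))
print(out.shape)  # torch.Size([1, 4, 32, 32])
```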
