DiTFastAttn: Attention Compression for Diffusion Transformer Models