DiTFastAttn: Attention Compression for Diffusion Transformer Models

Pu Lu