FIFO-Diffusion: Generating Infinite Videos from Text without Training
Jihwan Kim¹, Bohyung Han¹,²
Neural Information Processing Systems
We propose a novel inference technique based on a pretrained diffusion model for text-conditional video generation. Our approach, called FIFO-Diffusion, is conceptually capable of generating infinitely long videos without additional training. This is achieved by iteratively performing diagonal denoising, which simultaneously processes a series of consecutive frames with increasing noise levels in a queue; our method dequeues a fully denoised frame at the head while enqueuing a new random-noise frame at the tail. However, diagonal denoising is a double-edged sword: frames near the tail can exploit cleaner frames ahead of them through forward reference, but this strategy introduces a discrepancy between training and inference. Hence, we introduce latent partitioning to reduce the training-inference gap and lookahead denoising to retain the benefit of forward referencing. In practice, FIFO-Diffusion consumes a constant amount of memory regardless of the target video length for a given baseline model, and it is well suited to parallel inference on multiple GPUs. We demonstrate the effectiveness and promising results of the proposed method on existing text-to-video generation baselines.
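The diagonal denoising loop described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a hypothetical per-frame denoiser exposed as `denoise_step(latents, timesteps)` backed by a pretrained video diffusion model, and it omits the paper's latent partitioning and lookahead denoising refinements.

```python
import collections
import torch

def fifo_diffusion(denoise_step, num_steps, latent_shape, num_output_frames):
    """Sketch of diagonal denoising with a FIFO queue of frame latents.

    denoise_step(latents, timesteps): assumed to return each latent advanced
        one denoising step from its current noise level (timestep).
    num_steps: number of denoising steps, which equals the queue length.
    """
    # Queue position i holds a latent at noise level i, so noise increases
    # monotonically from the head (nearly clean) to the tail (pure noise).
    queue = collections.deque(torch.randn(num_steps, *latent_shape).unbind(0))
    timesteps = torch.arange(num_steps)  # per-position noise levels (fixed)

    outputs = []
    for _ in range(num_output_frames):
        latents = torch.stack(tuple(queue))
        # Diagonal denoising: all queued frames advance one noise level at once.
        queue = collections.deque(denoise_step(latents, timesteps).unbind(0))
        # Dequeue the fully denoised frame at the head ...
        outputs.append(queue.popleft())
        # ... and enqueue fresh Gaussian noise at the tail, restoring the
        # diagonal noise-level pattern for the next iteration.
        queue.append(torch.randn(latent_shape))
    return torch.stack(outputs)
```

A dummy denoiser such as `denoise_step = lambda x, t: 0.9 * x` exercises the queue mechanics without a real model; because each iteration emits one frame while the queue stays a fixed length, memory use is constant no matter how many frames are generated.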