NUWA-Infinity: Autoregressive over Autoregressive Generation for Infinite Visual Synthesis
Infinite visual synthesis aims to generate high-resolution images, long-duration videos, and even visual content of unbounded size. Some recent work has tried to solve this task by first dividing data into processable patches and then training models on them without considering the dependencies between patches. However, since these approaches fail to model global dependencies between patches, the quality and consistency of the generation can be limited. To address this issue, we propose NUWA-Infinity, a patch-level "render-and-optimize" strategy for infinite visual synthesis. Given a large image or a long video, NUWA-Infinity first splits it into non-overlapping patches and uses the ordered patch chain as a complete training instance; a rendering model then autoregressively predicts each patch based on its contexts. Once a patch is predicted, it is optimized immediately and its hidden states are saved as contexts for the next "render-and-optimize" step. This brings two advantages: (i) the autoregressive rendering process, with information transfer between contexts, provides implicit modeling of the global probability distribution; (ii) the timely optimization process alleviates the optimization stress on the model and aids convergence. Based on these designs, NUWA-Infinity shows strong synthesis ability on high-resolution images and long-duration videos.
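The abstract's "render-and-optimize" loop can be illustrated with a minimal sketch: split the input into non-overlapping patches, then, for each patch in order, render a prediction from the cached contexts, optimize on that patch immediately, and cache its hidden state for later patches. All function names here (`split_into_patches`, `render_and_optimize`, and the `render`/`optimize` callables) are hypothetical illustrations, not the paper's actual implementation, and the toy model stands in for the real rendering network.

```python
import numpy as np

def split_into_patches(image, patch):
    """Split an H x W array into non-overlapping patch x patch blocks,
    ordered row-major (the 'ordered patch chain' of the abstract)."""
    H, W = image.shape
    return [image[i:i + patch, j:j + patch]
            for i in range(0, H, patch)
            for j in range(0, W, patch)]

def render_and_optimize(patches, render, optimize):
    """Hypothetical patch-level render-and-optimize loop:
    each patch is predicted autoregressively from the cached contexts,
    optimized immediately, and its hidden state is appended to the cache."""
    context_cache = []   # hidden states of previously processed patches
    losses = []
    for target in patches:
        # render step: predict the patch from all earlier contexts
        pred, hidden = render(target, context_cache)
        # timely optimization: update on this patch right away
        losses.append(optimize(pred, target))
        # the patch's hidden state becomes context for later patches
        context_cache.append(hidden)
    return losses

if __name__ == "__main__":
    # Toy demo with stand-in render/optimize functions.
    img = np.arange(64, dtype=float).reshape(8, 8)
    patches = split_into_patches(img, 4)          # 4 patches of 4x4

    # Dummy "model": prediction depends on how much context is cached;
    # the hidden state is just the patch mean.
    render = lambda t, ctx: (np.zeros_like(t) + len(ctx), float(np.mean(t)))
    optimize = lambda p, t: float(np.mean((p - t) ** 2))

    losses = render_and_optimize(patches, render, optimize)
    print(len(patches), len(losses))
```

The point of the sketch is the control flow, not the model: because each `render` call sees the accumulated `context_cache`, information flows across patches, which is what gives the implicit global distribution modeling described above.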
Now Microsoft wants a share of the 'AI image generator' pie
Text-to-image generative models like OpenAI's DALL-E 2 are attracting significant attention because of their ability to produce images based solely on text prompts. While DALL-E 2 is the most popular, there are other budding AI image generators such as Midjourney, Craiyon, Meta's 'Make-A-Scene' and Google's 'Imagen'. Now, it seems that Microsoft also wants a share of the 'AI image generator' pie. Recently, Microsoft's Asia research team introduced NUWA-Infinity, a multimodal generative model designed to generate high-quality images and videos from any given text, image or video input. In its research paper titled 'NUWA-Infinity: Autoregressive over Autoregressive Generation for Infinite Visual Synthesis', Microsoft said that it evaluated NUWA-Infinity on five high-resolution visual synthesis tasks. Compared to its predecessor 'NUWA', which also covers images and videos, NUWA-Infinity has superior visual synthesis capabilities in terms of resolution and variable-size generation.