StochSync: Stochastic Diffusion Synchronization for Image Generation in Arbitrary Spaces

Kyeongmin Yeo, Jaihoon Kim, Minhyuk Sung

arXiv.org Artificial Intelligence 

Figure 1: Assorted mesh textures and panoramas generated using StochSync, including one in the background (environment map), which is a 360° panorama. StochSync extends the capabilities of image diffusion models trained in square spaces to produce images in arbitrary spaces such as cylinders, spheres, tori, and mesh surfaces.

We propose a zero-shot method for generating images in arbitrary spaces (e.g., a sphere for 360° panoramas). The zero-shot generation of various visual content using a pretrained image diffusion model has been explored mainly in two directions. First, Diffusion Synchronization, which performs reverse diffusion processes jointly across different projected spaces while synchronizing them in the target space, generates high-quality outputs when enough conditioning is provided, but it struggles in its absence. Second, Score Distillation Sampling, which gradually updates the target-space data through gradient descent, results in better coherence but often lacks detail. In this paper, we reveal for the first time the interconnection between these two methods while highlighting their differences. To this end, we propose StochSync, a novel approach that combines the strengths of both, enabling effective performance with weak conditioning. Project page: https://stochsync.github.io/.

Diffusion models pretrained on billions of images (Rombach et al., 2022; Midjourney) have demonstrated remarkable capabilities in various zero-shot applications.
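To make the "diffusion synchronization" idea above concrete, here is a minimal toy sketch, not the paper's actual method: the target space is a 1-D ring, each "view" is a projected window onto that ring, and synchronization averages the overlapping views back into the target space after every denoising step. The ring size, window offsets, and the linear `toy_denoise` stub are all illustrative assumptions standing in for real camera projections and a pretrained diffusion model.

```python
import numpy as np

RING = 32                     # size of the target-space canvas (toy assumption)
WIN = 8                       # size of each projected view
OFFSETS = [0, 6, 12, 18, 24]  # overlapping window positions on the ring

def project(z, off):
    """Extract one view (a window) from the ring, with wrap-around."""
    idx = (np.arange(WIN) + off) % RING
    return z[idx]

def unproject(views):
    """Synchronize: average the overlapping views back onto the ring."""
    acc = np.zeros(RING)
    cnt = np.zeros(RING)
    for v, off in zip(views, OFFSETS):
        idx = (np.arange(WIN) + off) % RING
        acc[idx] += v
        cnt[idx] += 1
    return acc / np.maximum(cnt, 1)

def toy_denoise(x, t):
    """Stub for one reverse-diffusion step: shrink the noise toward zero."""
    return x * (1.0 - 1.0 / t)

rng = np.random.default_rng(0)
z0 = rng.standard_normal(RING)  # pure noise in the target space
z = z0.copy()

# Diffusion synchronization loop: denoise each view independently,
# then merge all views in the target space after every step.
for t in range(10, 0, -1):
    views = [toy_denoise(project(z, off), t + 1) for off in OFFSETS]
    z = unproject(views)

print(z[:4])
```

Because projection here simply copies values and the stub denoiser is linear, every target-space element ends up scaled by the same contraction factor; with a real diffusion model, the per-view denoising predictions disagree, and it is exactly this averaging step that forces the views toward a coherent result in the target space.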