Addressing Negative Transfer in Diffusion Models
Neural Information Processing Systems
Diffusion-based generative models have achieved remarkable success in various domains. They train a shared model on denoising tasks that span different noise levels simultaneously, a form of multi-task learning (MTL). However, analyzing and improving diffusion models from an MTL perspective remains under-explored. In particular, MTL can lead to the well-known phenomenon of negative transfer, in which conflicts between tasks degrade the performance of certain tasks. In this paper, we first analyze diffusion training from an MTL standpoint, presenting two key observations: (O1) the task affinity between denoising tasks diminishes as the gap between their noise levels widens, and (O2) negative transfer can arise even in diffusion training.
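The multi-task view described above can be made concrete with a minimal sketch: each noise level defines one denoising "task", and standard diffusion training optimizes the average loss over all of them. The linear "denoiser", the specific noise schedule, and all variable names below are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shared "denoiser": one linear map used at every noise level.
# In a real diffusion model this is a neural network conditioned on t.
dim = 8
W = rng.normal(size=(dim, dim)) * 0.1

def denoise(x_noisy):
    return x_noisy @ W

# Each noise level sigma_t defines one denoising task: recover the clean
# sample x0 from x_t = sqrt(1 - sigma_t^2) * x0 + sigma_t * eps
# (a simplified variance-preserving forward process, assumed here).
sigmas = np.linspace(0.1, 0.9, 5)   # 5 noise levels = 5 tasks
x0 = rng.normal(size=(32, dim))     # batch of "clean" data

task_losses = []
for sigma in sigmas:
    eps = rng.normal(size=x0.shape)
    x_t = np.sqrt(1.0 - sigma**2) * x0 + sigma * eps
    # Per-task denoising loss (mean squared error against the clean data).
    task_losses.append(float(np.mean((denoise(x_t) - x0) ** 2)))

# Standard diffusion training minimizes the average over tasks, so gradient
# conflicts between distant noise levels (negative transfer) go unnoticed.
total_loss = float(np.mean(task_losses))
print(len(task_losses), total_loss > 0)
```

Because only the averaged `total_loss` is optimized, a shared model can trade off low-noise tasks against high-noise ones without any single scalar revealing it, which is why per-task (per-noise-level) diagnostics are needed to observe negative transfer.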