Training Generative Adversarial Networks by Solving Ordinary Differential Equations
Neural Information Processing Systems
The instability of Generative Adversarial Network (GAN) training has frequently been attributed to gradient descent. Consequently, recent methods have aimed to tailor the models and training procedures to stabilise the discrete updates. In contrast, we study the continuous-time dynamics induced by GAN training. Both theory and toy experiments suggest that these dynamics are in fact surprisingly stable. From this perspective, we hypothesise that instabilities in training GANs arise from the integration error in discretising the continuous dynamics.
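The integration-error hypothesis can be illustrated on a standard toy example (a bilinear min-max game, in the spirit of the paper's toy experiments, though this particular sketch is not taken from the paper): the continuous-time gradient flow is a pure rotation and hence stable, but explicit Euler discretisation — i.e. plain simultaneous gradient descent-ascent — amplifies the iterates at every step, while a higher-order integrator such as classical RK4 tracks the stable flow closely. All function names below are illustrative.

```python
import numpy as np

def grad_field(z):
    # Continuous-time dynamics of the toy bilinear game V(x, y) = x * y:
    # x descends, y ascends, giving dx/dt = -y, dy/dt = x.
    # This is a pure rotation, so the true continuous flow is stable.
    x, y = z
    return np.array([-y, x])

def euler_step(z, h):
    # Explicit Euler = simultaneous gradient descent-ascent with step size h.
    # Each step multiplies the norm by sqrt(1 + h**2), so iterates spiral out.
    return z + h * grad_field(z)

def rk4_step(z, h):
    # Classical fourth-order Runge-Kutta step: much smaller integration error,
    # so the discrete trajectory stays near the stable continuous one.
    k1 = grad_field(z)
    k2 = grad_field(z + 0.5 * h * k1)
    k3 = grad_field(z + 0.5 * h * k2)
    k4 = grad_field(z + h * k3)
    return z + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def final_norm(step_fn, steps=500, h=0.1):
    z = np.array([1.0, 0.0])
    for _ in range(steps):
        z = step_fn(z, h)
    return float(np.linalg.norm(z))

print(final_norm(euler_step))  # diverges: roughly (1 + h**2)**(steps/2) ~ 12
print(final_norm(rk4_step))    # stays very close to 1.0
```

The point is not that RK4 is the right GAN optimiser, but that the same continuous dynamics can look stable or unstable depending purely on the accuracy of the discretisation.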
Keywords: generative adversarial network, ordinary differential equation, training generative adversarial network, …