Why spectral normalization stabilizes GANs: analysis and improvements

AIHub

Figure 1: Training instability is one of the biggest challenges in training GANs. Despite the existence of successful heuristics like Spectral Normalization (SN) for improving stability, it is poorly understood why they work. In our research, we theoretically explain why SN stabilizes GAN training. Using these insights, we further propose a better normalization technique for improving GANs' stability, called Bidirectional Scaled Spectral Normalization.

Generative adversarial networks (GANs) are a class of popular generative models enabling many cutting-edge applications such as photorealistic image synthesis.
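
As context for how SN is applied in practice, the sketch below (our illustration, not code from the post) uses PyTorch's built-in spectral_norm wrapper, which rescales a layer's weights by an estimate of their largest singular value, keeping each wrapped discriminator layer approximately 1-Lipschitz:

    # Applying spectral normalization to a GAN discriminator (illustrative).
    import torch
    import torch.nn as nn
    from torch.nn.utils import spectral_norm

    class Discriminator(nn.Module):
        def __init__(self, in_dim=784, hidden_dim=256):
            super().__init__()
            # Each wrapped layer is rescaled to ~unit spectral norm.
            self.net = nn.Sequential(
                spectral_norm(nn.Linear(in_dim, hidden_dim)),
                nn.LeakyReLU(0.2),
                spectral_norm(nn.Linear(hidden_dim, hidden_dim)),
                nn.LeakyReLU(0.2),
                spectral_norm(nn.Linear(hidden_dim, 1)),
            )

        def forward(self, x):
            return self.net(x)

    D = Discriminator()
    scores = D(torch.randn(8, 784))  # one real/fake logit per sample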


CMU's Latest Machine Learning Research Analyzes and Improves Spectral Normalization In GANs

#artificialintelligence

GANs (generative adversarial networks) are cutting-edge deep generative models best known for producing high-resolution, photorealistic images. The goal of GANs is to generate random samples from a target data distribution given only a finite set of training examples. This is accomplished by learning two functions: a generator G that maps random input noise to a generated sample, and a discriminator D that attempts to classify input samples as real (i.e., from the training dataset) or fake (i.e., produced by the generator). Despite its success in improving the sample quality of data-driven generative models, GANs' adversarial training is a source of instability: small changes in hyperparameters, as well as randomness in the optimization process, can cause training to fail.
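
To make the two-function setup concrete, here is a minimal PyTorch sketch of the alternating updates (our illustration; the architectures, loss, and hyperparameters are placeholder choices, not the article's):

    # One GAN training step (illustrative sketch; all choices are placeholders).
    import torch
    import torch.nn as nn

    noise_dim, data_dim = 64, 784
    G = nn.Sequential(nn.Linear(noise_dim, 256), nn.ReLU(),
                      nn.Linear(256, data_dim))
    D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                      nn.Linear(256, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    def train_step(real):  # real: (batch, data_dim) training samples
        batch = real.size(0)
        fake = G(torch.randn(batch, noise_dim))

        # Discriminator update: real samples toward label 1, fakes toward 0.
        d_loss = (bce(D(real), torch.ones(batch, 1)) +
                  bce(D(fake.detach()), torch.zeros(batch, 1)))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # Generator update: try to make D label the fakes as real.
        g_loss = bce(D(fake), torch.ones(batch, 1))
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()
        return d_loss.item(), g_loss.item()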


Why Spectral Normalization Stabilizes GANs: Analysis and Improvements

arXiv.org Machine Learning

Spectral normalization (SN) is a widely used technique for improving the stability of Generative Adversarial Networks (GANs) by forcing each layer of the discriminator to have unit spectral norm. This approach controls the Lipschitz constant of the discriminator, and is empirically known to improve sample quality in many GAN architectures. However, there is currently little understanding of why SN is so effective. In this work, we show that SN controls two important failure modes of GAN training: exploding and vanishing gradients. Our proofs illustrate a (perhaps unintentional) connection with the successful LeCun initialization technique, proposed over two decades ago to control gradients in the training of deep neural networks. This connection helps to explain why the most popular implementation of SN for GANs requires no hyperparameter tuning, whereas stricter implementations of SN have poor empirical performance out of the box. Unlike LeCun initialization which only controls gradient vanishing at the beginning of training, we show that SN tends to preserve this property throughout training. Finally, building on this theoretical understanding, we propose Bidirectional Spectral Normalization (BSN), a modification of SN inspired by Xavier initialization, a later improvement to LeCun initialization. Theoretically, we show that BSN gives better gradient control than SN. Empirically, we demonstrate that BSN outperforms SN in sample quality on several benchmark datasets, while also exhibiting better training stability.
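
For reference, the standard SN recipe estimates each layer's largest singular value with power iteration and divides the weights by it. The sketch below (our paraphrase of that standard recipe, not the paper's code) shows a single iteration; real implementations keep the vector u persistent across training steps so one iteration per step suffices:

    # Illustrative power-iteration step for spectral normalization (SN).
    import torch

    def spectral_normalize(W, u, eps=1e-12):
        # One power-iteration update of the right/left singular vectors.
        v = torch.nn.functional.normalize(W.t() @ u, dim=0, eps=eps)
        u = torch.nn.functional.normalize(W @ v, dim=0, eps=eps)
        sigma = u @ W @ v       # estimate of the largest singular value
        return W / sigma, u     # normalized weights have spectral norm ~1

    W = torch.randn(128, 64)
    u = torch.nn.functional.normalize(torch.randn(128), dim=0)
    W_sn, u = spectral_normalize(W, u)
    print(torch.linalg.matrix_norm(W_sn, ord=2))  # ~1.0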


Mean Spectral Normalization of Deep Neural Networks for Embedded Automation

arXiv.org Machine Learning

Deep Neural Networks (DNNs) have begun to thrive in the field of automation systems, owing to recent advancements in standardising various aspects such as architecture, optimization techniques, and regularization. In this paper, we take a step towards a better understanding of Spectral Normalization (SN) and its potential for standardizing regularization of a wider range of Deep Learning models, following an empirical approach. We conduct several experiments to study SN's training dynamics, in comparison with the ubiquitous Batch Normalization (BN), and show that SN increases gradient sparsity and controls the gradient variance. Furthermore, we show that SN suffers from a phenomenon we call the mean-drift effect, which degrades its performance. We then propose a weight reparameterization called Mean Spectral Normalization (MSN) to resolve the mean drift, thereby significantly improving the network's performance. Our model performs ~16% faster than BN in practice, and has fewer trainable parameters. We also show the performance of MSN for small, medium, and large CNNs (a 3-layer CNN, VGG7, and DenseNet-BC, respectively) and on unsupervised image generation tasks using Generative Adversarial Networks (GANs) to evaluate its applicability for a broad range of embedded automation tasks.
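
The abstract does not spell out the MSN parameterization, so the sketch below is only our hedged reading: keep SN on the weights and remove the mean drift by recentering pre-activations with a learnable shift, in the spirit of mean-only batch normalization. The paper's exact formulation may differ.

    # Hedged sketch of a mean-drift fix as we read the abstract; the paper's
    # exact MSN parameterization may differ from this illustration.
    import torch
    import torch.nn as nn
    from torch.nn.utils import spectral_norm

    class MeanSpectralLinear(nn.Module):
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.linear = spectral_norm(nn.Linear(in_dim, out_dim, bias=False))
            self.beta = nn.Parameter(torch.zeros(out_dim))  # learnable shift

        def forward(self, x):
            y = self.linear(x)
            # Subtract the batch mean so the activation mean cannot drift,
            # then restore a learnable offset.
            return y - y.mean(dim=0, keepdim=True) + self.beta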


Solving the Vanishing Gradient Problem with Self-Normalizing Neural Networks using Keras

#artificialintelligence

Training deep neural networks can be a challenging task, especially for very deep models. A major part of this difficulty is due to the instability of the gradients computed via backpropagation. In this post, we will learn how to create a self-normalizing deep feed-forward neural network using Keras. This addresses the gradient instability issue, speeding up training convergence and improving model performance. Disclaimer: This article is a brief summary with a focus on implementation.
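
For context, the standard Keras recipe for a self-normalizing network combines SELU activations, lecun_normal weight initialization, and AlphaDropout (which preserves the zero-mean/unit-variance statistics that ordinary dropout would disturb). The sketch below is our minimal illustration; the depth, widths, and dropout rate are placeholder choices, not necessarily the post's exact architecture:

    # Minimal self-normalizing feed-forward network in Keras (illustrative).
    from tensorflow import keras

    model = keras.Sequential()
    model.add(keras.Input(shape=(784,)))
    for _ in range(8):  # depth is a placeholder choice
        model.add(keras.layers.Dense(128, activation="selu",
                                     kernel_initializer="lecun_normal"))
        model.add(keras.layers.AlphaDropout(0.05))
    model.add(keras.layers.Dense(10, activation="softmax"))

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()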