Why spectral normalization stabilizes GANs: analysis and improvements

AIHub

Training instability is one of the biggest challenges in training GANs. Despite the existence of successful heuristics like Spectral Normalization (SN) for improving stability, it is poorly understood why they work. In our research, we theoretically explain why SN stabilizes GAN training. Using these insights, we further propose Bidirectional Scaled Spectral Normalization, an improved normalization technique for stabilizing GANs. Generative adversarial networks (GANs) are a class of popular generative models enabling many cutting-edge applications such as photorealistic image synthesis.
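As a rough illustration of the mechanism being analyzed, the sketch below shows the standard spectral-normalization step: estimate the largest singular value of a layer's weight by power iteration and divide the weight by it, which caps the layer's Lipschitz constant. This is a generic PyTorch sketch, not the authors' Bidirectional Scaled Spectral Normalization; the function name and the single power-iteration step are illustrative choices.

```python
import torch
import torch.nn.functional as F

def spectral_normalize(weight: torch.Tensor, n_iters: int = 1) -> torch.Tensor:
    """Divide a weight tensor by a power-iteration estimate of its largest
    singular value (the spectral norm), as in standard spectral normalization."""
    w = weight.reshape(weight.shape[0], -1)       # flatten conv kernels to 2-D (out, in*k*k)
    u = torch.randn(w.shape[0], device=w.device)  # running estimate of the left singular vector
    v = None
    for _ in range(n_iters):                      # power iteration
        v = F.normalize(w.t() @ u, dim=0)
        u = F.normalize(w @ v, dim=0)
    sigma = torch.dot(u, w @ v)                   # approximate spectral norm
    return weight / sigma

# PyTorch also ships a hook-based version that keeps u between training steps:
# layer = torch.nn.utils.spectral_norm(torch.nn.Conv2d(64, 64, 3))
```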


Mean Spectral Normalization of Deep Neural Networks for Embedded Automation

arXiv.org Machine Learning

Deep Neural Networks (DNNs) have begun to thrive in the field of automation systems, owing to recent advancements in standardizing various aspects such as architecture, optimization techniques, and regularization. In this paper, we take a step towards a better understanding of Spectral Normalization (SN) and its potential for standardizing regularization of a wider range of deep learning models, following an empirical approach. We conduct several experiments to study its training dynamics in comparison with the ubiquitous Batch Normalization (BN) and show that SN increases gradient sparsity and controls gradient variance. Furthermore, we show that SN suffers from a phenomenon we call the mean-drift effect, which hampers its performance. We then propose a weight reparameterization, called Mean Spectral Normalization (MSN), to resolve the mean drift, thereby significantly improving the network's performance. Our model performs ~16% faster than BN in practice and has fewer trainable parameters. We also show the performance of MSN for small, medium, and large CNNs (a 3-layer CNN, VGG7, and DenseNet-BC, respectively) and for unsupervised image generation with Generative Adversarial Networks (GANs), to evaluate its applicability to a broad range of embedded automation tasks.
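The abstract does not spell out the MSN update, so the following is only a speculative sketch: it re-centers the weights (to counter the described mean-drift effect) and then applies the usual power-iteration spectral normalization. The function name and the way the mean correction is applied are assumptions, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def mean_spectral_normalize(weight: torch.Tensor, n_iters: int = 1) -> torch.Tensor:
    """Speculative MSN-style step (an assumption, not the paper's definition):
    re-center the weights to counter mean drift, then apply the usual
    power-iteration spectral normalization."""
    w = weight - weight.mean()                    # assumed mean correction
    w2d = w.reshape(w.shape[0], -1)
    u = torch.randn(w2d.shape[0], device=w.device)
    v = None
    for _ in range(n_iters):                      # power iteration, as in plain SN
        v = F.normalize(w2d.t() @ u, dim=0)
        u = F.normalize(w2d @ v, dim=0)
    return w / torch.dot(u, w2d @ v)              # divide by approximate spectral norm
```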


Depthwise Separable Convolutions Allow for Fast and Memory-Efficient Spectral Normalization

arXiv.org Machine Learning

An increasing number of models require controlling the spectral norm of the convolutional layers of a neural network. While there is an abundance of methods for estimating and enforcing upper bounds on these norms during training, they are typically costly in either memory or time. In this work, we introduce a very simple method for spectral normalization of depthwise separable convolutions, which introduces negligible computational and memory overhead. We demonstrate the effectiveness of our method on image classification tasks using standard architectures like MobileNetV2.
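One way to see why the separable structure makes this cheap: a depthwise convolution applies an independent single-channel filter per channel, so its spectral norm is the maximum of the per-channel operator norms, while a 1x1 pointwise convolution is just a matrix acting on the channel dimension. The sketch below computes both quantities under the simplifying assumption of circular padding (where each per-channel norm is the largest magnitude in the filter's 2-D DFT); it illustrates the factorization, not the paper's exact estimator.

```python
import torch

def depthwise_spectral_norm(dw_kernel: torch.Tensor, input_size: int) -> torch.Tensor:
    """dw_kernel: (channels, 1, k, k). Each channel is an independent 2-D
    convolution, so the layer's spectral norm is the max over channels of the
    per-channel operator norm; with circular padding (an assumption for this
    sketch) that norm is the largest magnitude of the filter's 2-D DFT."""
    transfer = torch.fft.fft2(dw_kernel[:, 0], s=(input_size, input_size))
    return transfer.abs().amax()                  # max over channels and frequencies

def pointwise_spectral_norm(pw_kernel: torch.Tensor) -> torch.Tensor:
    """pw_kernel: (c_out, c_in, 1, 1). A 1x1 convolution acts as a plain matrix
    on the channel dimension, so its spectral norm is the largest singular value."""
    return torch.linalg.matrix_norm(pw_kernel[:, :, 0, 0], ord=2)
```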


CMU's Latest Machine Learning Research Analyzes and Improves Spectral Normalization In GANs

#artificialintelligence

GANs (generative adversarial networks) are cutting-edge deep generative models best known for producing high-resolution, photorealistic images. The goal of GANs is to generate random samples from a target data distribution when only a small set of training examples is available. This is accomplished by learning two functions: a generator G that maps random input noise to a generated sample, and a discriminator D that attempts to classify input samples as real (i.e., from the training dataset) or fake (i.e., produced by the generator). Despite its success in enhancing the sample quality of data-driven generative models, this adversarial training makes GANs unstable: small changes in hyperparameters, as well as randomness in the optimization process, can cause training to fail.
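For readers new to the setup, the sketch below shows one adversarial update with the standard non-saturating GAN losses. G, D, the optimizers, and the noise dimension are placeholders for whatever architecture is being trained; this is a generic illustration of the two-player game, not the CMU method discussed above.

```python
import torch
import torch.nn.functional as F

def gan_step(G, D, real, opt_g, opt_d, noise_dim=128):
    """One adversarial update with the standard non-saturating GAN losses.
    G, D, the optimizers, and noise_dim are placeholders for a concrete setup."""
    z = torch.randn(real.shape[0], noise_dim, device=real.device)

    # Discriminator: label real samples 1 and generated samples 0.
    real_logits = D(real)
    fake_logits = D(G(z).detach())
    d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator into labelling its samples as real.
    fake_logits = D(G(z))
    g_loss = F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```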


How to Initialize your Network? Robust Initialization for WeightNorm & ResNets

arXiv.org Machine Learning

Residual networks (ResNets) and weight normalization play an important role in various deep learning applications. However, parameter initialization strategies have not previously been studied for weight-normalized networks and, in practice, initialization methods designed for un-normalized networks are used as a proxy. Similarly, initialization for ResNets has also been studied only for un-normalized networks, often under simplified settings that ignore the shortcut connection. To address these issues, we propose a novel parameter initialization strategy that avoids explosion/vanishing of information across layers for weight-normalized networks with and without residual connections. The proposed strategy is based on a theoretical analysis using a mean field approximation. We run over 2,500 experiments and evaluate our proposal on image datasets, showing that the proposed initialization outperforms existing initialization methods in terms of generalization performance, robustness to hyper-parameter values, and variance between seeds, especially when networks get deeper, in which case existing methods fail to even start training. Finally, we show that using our initialization in conjunction with learning rate warmup reduces the gap between the performance of weight-normalized and batch-normalized networks.
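For context, the sketch below shows the weight-normalization reparameterization w = g · v / ||v|| that such initialization schemes target. The initial values chosen here (random unit-norm direction, gain 1, zero bias) are generic illustrative choices, not the mean-field-derived scheme the paper proposes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightNormLinear(nn.Module):
    """Weight-normalization reparameterization w = g * v / ||v||.
    The initializations below are generic illustrative choices, NOT the
    paper's mean-field-derived scheme."""
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        v = torch.randn(out_features, in_features)
        self.v = nn.Parameter(v)                          # direction: only its orientation matters
        self.g = nn.Parameter(torch.ones(out_features))   # per-unit gain
        self.b = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.g.unsqueeze(1) * self.v / self.v.norm(dim=1, keepdim=True)
        return F.linear(x, w, self.b)

# PyTorch's built-in wrapper applies the same reparameterization:
# layer = torch.nn.utils.weight_norm(nn.Linear(256, 256))
```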