Permute to Train: A New Dimension to Training Deep Neural Networks

arXiv.org Machine Learning

We show that Deep Neural Networks (DNNs) can be efficiently trained by permuting neuron connections. We introduce a new family of methods for training DNNs, called Permute to Train (P2T), and present two implementations: Stochastic Gradient Permutation and Lookahead Permutation. The former computes the permutation from the gradient, while the latter relies on another optimizer to derive it. We show empirically that our method, despite only swapping randomly weighted connections, achieves accuracy comparable to that of Adam on the MNIST, Fashion-MNIST, and CIFAR-10 datasets. This opens up possibilities for new ways to train and regularize DNNs.
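
The abstract does not spell out how a permutation is derived from the gradient, so the following is only a rough sketch of the general idea (training by re-ordering a fixed pool of random weights, never changing their values). The rank-matching heuristic used here is an assumption for illustration, not the paper's Stochastic Gradient Permutation algorithm.

    # Illustrative sketch only (assumed heuristic, not the paper's method):
    # "train" a linear model by permuting a fixed pool of random weights so that
    # their ordering matches the ordering a plain gradient step would produce.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 10))                # toy regression data
    w_true = rng.normal(size=10)
    y = X @ w_true + 0.01 * rng.normal(size=256)

    pool = rng.normal(size=10)                    # fixed random weights: values never change
    w = pool[rng.permutation(10)]                 # current permutation of the pool
    lr = 0.05

    for step in range(200):
        grad = 2 * X.T @ (X @ w - y) / len(y)     # gradient of the mean squared error
        target = w - lr * grad                    # where an ordinary SGD step would go
        ranks = np.argsort(np.argsort(target))    # rank of each coordinate of the target
        w = np.sort(pool)[ranks]                  # re-order the pool to match those ranks

    print("final MSE:", np.mean((X @ w - y) ** 2))

Whether such a scheme reaches accuracy comparable to Adam, as reported in the abstract, depends on details (layer-wise permutations, schedules, the size of the weight pool) that this toy does not capture.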


The effect of the choice of neural network depth and breadth on the size of its hypothesis space

arXiv.org Machine Learning

We show that the number of unique function mappings in a neural network hypothesis space is inversely proportional to $\prod_{l} U_{l}!$, where $U_{l}$ is the number of neurons in hidden layer $l$.
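
As a worked illustration of why the factor $\prod_{l} U_{l}!$ appears (this example is mine, not the paper's): permuting the neurons within a hidden layer, together with the corresponding rows and columns of the adjacent weight matrices, leaves the computed function unchanged. For a network with two hidden layers of $U_1 = 3$ and $U_2 = 4$ neurons, each function is therefore realized by $3! \cdot 4! = 6 \cdot 24 = 144$ weight configurations, so the number of unique function mappings is smaller than the number of weight configurations by exactly this factor, consistent with the inverse proportionality stated above.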


Low-loss connection of weight vectors: distribution-based approaches

arXiv.org Machine Learning

Recent research shows that sublevel sets of the loss surfaces of overparameterized networks are connected, exactly or approximately. We describe, and compare experimentally, a panel of methods for connecting two low-loss points by a low-loss curve on this surface. Our methods vary in accuracy and complexity. Most are based on "macroscopic" distributional assumptions, and some are insensitive to the detailed properties of the points to be connected. Some require prior training of a "global connection model" that can then be applied to any pair of points. In general, a method's accuracy correlates with its complexity and with its sensitivity to the details of the endpoints.
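
As a point of reference for what "connecting two low-loss points" means operationally, here is a minimal sketch under my own assumptions, not one of the paper's distribution-based methods: given two independently trained weight vectors for the same model, evaluate the loss along the straight line between them.

    # Minimal sketch (illustrative, not the paper's methods): loss along the
    # linear path between two independently trained weight vectors.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(400, 5))                       # toy binary classification data
    y = (X @ rng.normal(size=5) > 0).astype(float)

    def loss(w):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

    def train(w, steps=500, lr=0.5):
        for _ in range(steps):
            p = 1.0 / (1.0 + np.exp(-(X @ w)))
            w = w - lr * X.T @ (p - y) / len(y)         # gradient of the logistic loss
        return w

    w_a = train(rng.normal(size=5))                     # two low-loss endpoints
    w_b = train(rng.normal(size=5))                     # from different initializations

    for t in np.linspace(0.0, 1.0, 6):                  # loss along w(t) = (1-t)*w_a + t*w_b
        print(f"t={t:.1f}  loss={loss((1 - t) * w_a + t * w_b):.4f}")

For this convex toy model the linear path is automatically low-loss; for deep networks it generally is not, which is what motivates the curve-finding and global-connection-model approaches the abstract describes.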


Are Saddles Good Enough for Deep Learning?

arXiv.org Machine Learning

Recent years have seen growing interest in understanding deep neural networks from an optimization perspective. It is now understood that converging to low-cost local minima is sufficient for such models to become effective in practice. In this work, however, we propose a new hypothesis, based on recent theoretical findings and empirical studies, that deep neural network models actually converge to saddle points with high degeneracy. These findings are new and can have a significant impact on the development of gradient-descent-based methods for training deep networks. We validate our hypothesis with an extensive experimental evaluation on standard datasets such as MNIST and CIFAR-10, and also show that recent efforts to escape saddles ultimately converge to saddles with high degeneracy, which we define as `good saddles'. We also verify Wigner's semicircle law in our experimental results.
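
A minimal sketch of the kind of diagnostic that separates degenerate saddles from strict minima (an illustration under my own assumptions, not the paper's experimental protocol): estimate the Hessian eigenspectrum of a small network at a converged point and count the near-zero and negative eigenvalues.

    # Illustrative sketch (not the paper's experiments): Hessian eigenspectrum of a
    # tiny network at a converged point; many near-zero eigenvalues indicate a
    # degenerate critical point, negative ones indicate a saddle.
    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 2))
    y = np.tanh(X @ np.array([1.0, -1.0]))          # toy regression target

    def loss(theta):
        W1 = theta[:4].reshape(2, 2)                # hidden layer (2 -> 2), no biases
        w2 = theta[4:]                              # output layer (2 -> 1)
        return np.mean((np.tanh(X @ W1) @ w2 - y) ** 2)

    def num_grad(f, t, eps=1e-5):                   # central finite-difference gradient
        g = np.zeros_like(t)
        for i in range(len(t)):
            e = np.zeros_like(t)
            e[i] = eps
            g[i] = (f(t + e) - f(t - e)) / (2 * eps)
        return g

    theta = rng.normal(size=6)
    for _ in range(2000):                           # plain gradient descent to convergence
        theta -= 0.1 * num_grad(loss, theta)

    H = np.array([num_grad(lambda t: num_grad(loss, t)[i], theta) for i in range(6)])
    eig = np.linalg.eigvalsh((H + H.T) / 2)         # symmetrize before eigendecomposition
    print("eigenvalues:", np.round(eig, 4))
    print("near-zero fraction:", np.mean(np.abs(eig) < 1e-3),
          "negative fraction:", np.mean(eig < -1e-3))

The quantity of interest is the fraction of eigenvalues close to zero; the abstract's hypothesis is that, for deep networks at convergence, this fraction is large.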


Spurious Local Minima of Shallow ReLU Networks Conform with the Symmetry of the Target Model

arXiv.org Machine Learning

We consider the optimization problem associated with fitting two-layer ReLU networks with respect to the squared loss, where labels are assumed to be generated by a target network. Focusing first on standard Gaussian inputs, we show that the structure of spurious local minima detected by stochastic gradient descent (SGD) exhibits, in a well-defined sense, the \emph{least loss of symmetry} with respect to the target weights. A closer look at the analysis then indicates that this principle of least symmetry breaking may apply to a broader range of settings. Motivated by this, we conduct a series of experiments that corroborate this hypothesis for different classes of non-isotropic, non-product distributions, smooth activation functions, and networks with a few layers.
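
For reference, one common way of writing the optimization problem described above (a standard teacher-student formulation with second-layer weights fixed to one; the notation is my own choice, not necessarily the paper's): with inputs $x \sim \mathcal{N}(0, I_d)$, target weights $v_1, \dots, v_k$, and trainable weights $w_1, \dots, w_k$, the population squared loss of the two-layer ReLU network is

$$ L(w_1, \dots, w_k) \;=\; \mathbb{E}_{x \sim \mathcal{N}(0, I_d)} \Big[ \Big( \sum_{i=1}^{k} \max\{ w_i^\top x, 0 \} \;-\; \sum_{i=1}^{k} \max\{ v_i^\top x, 0 \} \Big)^{2} \Big], $$

and a spurious local minimum is a local minimizer with $L > 0$. Loosely speaking, the symmetry referred to in the title is the invariance of this objective under permutations of the summands and, for isotropic Gaussian inputs, under orthogonal transformations applied jointly to all weight vectors.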