Weight Agnostic Neural Networks

Neural Information Processing Systems

Not all neural network architectures are created equal; some perform much better than others for certain tasks. But how important are the weight parameters of a neural network compared to its architecture? In this work, we question to what extent neural network architectures alone, without learning any weight parameters, can encode solutions for a given task. We propose a search method for neural network architectures that can already perform a task without any explicit weight training. To evaluate these networks, we populate the connections with a single shared weight parameter sampled from a uniform random distribution and measure the expected performance.
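
To make that evaluation procedure concrete, here is a minimal sketch of how a fixed architecture might be scored with a single shared weight, assuming hypothetical `forward_fn` and `env_rollout` callables that stand in for the actual network construction and task rollout; the sampling range and number of samples are illustrative, not taken from the paper.

```python
import numpy as np

def evaluate_shared_weight(forward_fn, env_rollout, weight_samples=None):
    """Score an architecture when every connection shares one weight value.

    forward_fn(weight) -> policy: builds the network with all connections
    set to `weight`. env_rollout(policy) -> float: returns an episode score.
    Both are hypothetical callables standing in for the real search code.
    """
    if weight_samples is None:
        # Sample the single shared weight from a uniform distribution.
        weight_samples = np.random.uniform(-2.0, 2.0, size=10)
    returns = [env_rollout(forward_fn(w)) for w in weight_samples]
    # The architecture's fitness is its expected performance over weight samples.
    return float(np.mean(returns))
```

In the proposed search, architectures would presumably be ranked by this expectation rather than by performance under any single tuned weight setting.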


Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization

arXiv.org Machine Learning

Optimization of Binarized Neural Networks (BNNs) currently relies on real-valued latent weights to accumulate small update steps. In this paper, we argue that these latent weights cannot be treated analogously to weights in real-valued networks. Instead, their main role is to provide inertia during training. We interpret current methods in terms of inertia and provide novel insights into the optimization of BNNs. We subsequently introduce the first optimizer specifically designed for BNNs, the Binary Optimizer (Bop), and demonstrate its performance on CIFAR-10 and ImageNet. Together, the redefinition of latent weights as inertia and the introduction of Bop enable a better understanding of BNN optimization and open the way for further improvements in training methodologies for BNNs.
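
As a rough illustration of treating the accumulated quantity as inertia rather than as a latent weight, here is a minimal sketch of a Bop-style flip update; the exact flip rule, the hyperparameter names (`gamma`, `tau`), and their default values are my paraphrase of the abstract's idea, not code from the paper.

```python
import numpy as np

def bop_step(w, grad, m, gamma=1e-3, tau=1e-6):
    """One Bop-style update on binary weights w in {-1, +1}.

    Instead of accumulating real-valued latent weights, keep an exponential
    moving average m of the gradient (the "inertia") and flip a weight only
    when that average is strong enough (|m| > tau) and aligned with the
    weight's current sign, i.e. when descent argues against keeping it.
    Hyperparameters gamma (adaptivity) and tau (threshold) are illustrative.
    """
    m = (1.0 - gamma) * m + gamma * grad                # gradient inertia
    flip = (np.abs(m) > tau) & (np.sign(m) == np.sign(w))
    w = np.where(flip, -w, w)                           # flip selected weights
    return w, m
```

The point of the sketch is that no real-valued copy of the weights is ever stored; only the inertia term `m` persists between steps.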


Deep Learning Best Practices (1) -- Weight Initialization

#artificialintelligence

As a beginner at deep learning, one of the things I realized is that there isn't much online documentation that covers all the deep learning tricks in one place. There are lots of small best practices, ranging from simple tricks like weight initialization and regularization to slightly more complex techniques like cyclic learning rates, that can make training and debugging neural nets easier and more efficient. This inspired me to write this series of blogs, where I will cover as many nuances as I can to make implementing deep learning simpler for you. While writing this blog, the assumption is that you have a basic idea of how neural networks are trained. An understanding of weights, biases, hidden layers, activations and activation functions will make the content clearer.
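
The excerpt only names the tricks, so as one concrete example of the weight-initialization practice it mentions, here is a minimal NumPy sketch of He (Kaiming) initialization for ReLU layers; the layer sizes below are illustrative.

```python
import numpy as np

def he_init(fan_in, fan_out, rng=None):
    """He (Kaiming) initialization: scale weights by sqrt(2 / fan_in) so
    activation variance stays roughly constant through ReLU layers."""
    rng = np.random.default_rng() if rng is None else rng
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_in, fan_out))

# Example: initialize the weights of a small two-layer ReLU network.
W1 = he_init(784, 256)
W2 = he_init(256, 10)
```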


Visualizing your goal weight could help boost weight loss, study finds

FOX News

A new study suggests visualizing your goal weight can help boost weight loss. Ever vowed to lose weight but given up after a week because you're just not motivated? We all know that the key to weight loss is to eat less and move more, but actually doing so can be impossible. That's why experts are increasingly coming around to the belief that our minds are an almost equally crucial part of the weight-loss puzzle. In fact, a new study has suggested that simply visualizing your goal weight could boost weight loss by as much as five times.