
What Is Angular Momentum, Anyway?

WIRED

I've used this concept to describe everything from fidget spinners to standing double back flips to the movement of strange interstellar asteroids.


Gradient Descent Optimization Techniques

#artificialintelligence

Gradient descent is one of the most popular algorithms for performing optimization and by far the most common way to optimize neural networks. At the same time, every state-of-the-art Deep Learning library contains implementations of various algorithms to optimize gradient descent. This blog post aims to provide you with intuitions about the behaviour of different algorithms for optimizing gradient descent that will help you put them to use. Gradient descent is a way to minimize an objective function J(θ), parameterized by a model's parameters θ, by updating the parameters in the opposite direction of the gradient of the objective function ∇θJ(θ) w.r.t. the parameters. The learning rate η determines the size of the steps we take to reach a (local) minimum.
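A minimal sketch of the vanilla update rule described above, θ ← θ − η ∇θJ(θ), might look like the following; the quadratic objective, target vector, learning rate, and step count are illustrative assumptions, not taken from the post itself.

```python
import numpy as np

# Illustrative objective: J(theta) = ||theta - target||^2,
# whose gradient w.r.t. theta is 2 * (theta - target).
def grad_J(theta, target):
    return 2.0 * (theta - target)

eta = 0.1                                  # learning rate
theta = np.zeros(3)                        # initial parameters
target = np.array([1.0, -2.0, 0.5])        # minimum of the toy objective

for step in range(100):
    # Move in the opposite direction of the gradient: theta <- theta - eta * grad J(theta)
    theta -= eta * grad_J(theta, target)

print(theta)  # approaches the target as the objective is minimized
```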


Taming Momentum in a Distributed Asynchronous Environment

arXiv.org Machine Learning

Although distributed computing can significantly reduce the training time of deep neural networks, scaling the training process while maintaining high efficiency and final accuracy is challenging. Distributed asynchronous training enjoys near-linear speedup, but asynchrony causes gradient staleness, the main difficulty in scaling stochastic gradient descent to large clusters. Momentum, which is often used to accelerate convergence and escape local minima, exacerbates the gradient staleness, thereby hindering convergence. We propose DANA: a novel asynchronous distributed technique which is based on a new gradient staleness measure that we call the gap. By minimizing the gap, DANA mitigates the gradient staleness, despite using momentum, and therefore scales to large clusters while maintaining high final accuracy and fast convergence. DANA adapts Nesterov's Accelerated Gradient to a distributed setting, computing the gradient on an estimated future position of the model's parameters. In turn, we show that DANA's estimation of the future position amplifies the use of a Taylor expansion, which relies on a fast Hessian approximation, making it much more effective and accurate. Our evaluation on the CIFAR and ImageNet datasets shows that DANA outperforms existing methods, in both final accuracy and convergence speed.
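The abstract describes computing the gradient at an estimated future position of the model's parameters. A single-worker sketch of that lookahead idea, i.e. standard Nesterov accelerated gradient rather than DANA's distributed, gap-based scheme, is shown below; the quadratic objective, learning rate, and momentum coefficient are illustrative assumptions.

```python
import numpy as np

# Gradient of the toy objective J(theta) = ||theta||^2.
def grad_J(theta):
    return 2.0 * theta

eta, mu = 0.05, 0.9                 # learning rate, momentum coefficient
theta = np.array([5.0, -3.0])       # initial parameters
v = np.zeros_like(theta)            # momentum (velocity) buffer

for step in range(200):
    lookahead = theta + mu * v      # estimated future position of the parameters
    g = grad_J(lookahead)           # gradient evaluated at the lookahead point
    v = mu * v - eta * g            # velocity update
    theta = theta + v               # parameter update

print(theta)  # converges toward the minimum at the origin
```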


Nesterov's Accelerated Gradient and Momentum as approximations to Regularised Update Descent

arXiv.org Machine Learning

We present a unifying framework for adapting the update direction in gradient-based iterative optimization methods. As natural special cases we re-derive classical momentum and Nesterov's accelerated gradient method, lending a new intuitive interpretation to the latter algorithm. We show that a new algorithm, which we term Regularised Gradient Descent, can converge more quickly than either Nesterov's algorithm or the classical momentum algorithm.
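For reference, the two special cases the abstract mentions can be written in their standard textbook form (this is not the paper's unifying Regularised Update Descent derivation); η denotes the learning rate and μ the momentum coefficient.

```latex
% Classical momentum: gradient taken at the current iterate
v_{t+1} = \mu v_t - \eta \nabla J(\theta_t), \qquad \theta_{t+1} = \theta_t + v_{t+1}

% Nesterov's accelerated gradient: gradient taken at the lookahead point
v_{t+1} = \mu v_t - \eta \nabla J(\theta_t + \mu v_t), \qquad \theta_{t+1} = \theta_t + v_{t+1}
```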


Cryptocurrency Market Update: Waiting For Momentum On Sunday (NewsBTC)

#artificialintelligence

There has been a very minor pullback in crypto markets today. In general, though, things have been pretty static over the weekend, and total capitalization has remained over $200 billion for another day. Bitcoin has lost a fraction since yesterday and is currently trading at $6,500. The critical level is still the $6,600 resistance point for BTC, though bullish momentum appears to have waned over the weekend. Ethereum has made another one-percent gain, lifting it to $218 at the time of writing.