Collaborating Authors

Neural nets - learning with total gradient rather than stochastic gradients? • /r/MachineLearning


The estimate of the gradient from just a mini-batch is usually good enough to point you in the right descent direction. It doesn't make sense to do the extra computation for a marginally better estimate. Plus, the inaccuracy or noise introduced by the mini-batch approximation can act as a regularizer. Here is an interesting paper that performs statistical tests during optimization: if the gradient is not statistically significant, more samples are added to the mini-batch.
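The claim above — that a mini-batch gradient points in roughly the same direction as the full ("total") gradient at a fraction of the cost — can be checked directly. Below is a minimal sketch on a toy linear-regression problem (the data, batch size, and dimensions are illustrative assumptions, not from the original post); it compares the full-dataset gradient with a 128-sample estimate via cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression problem (illustrative data, not from the original post).
X = rng.normal(size=(10_000, 20))
w_true = rng.normal(size=20)
y = X @ w_true + 0.1 * rng.normal(size=10_000)

w = np.zeros(20)  # current parameters

def gradient(Xb, yb, w):
    # Gradient of the mean squared error 0.5 * mean((Xb @ w - yb)**2).
    return Xb.T @ (Xb @ w - yb) / len(yb)

full_grad = gradient(X, y, w)              # "total" gradient over all 10,000 samples
idx = rng.choice(len(X), size=128, replace=False)
mini_grad = gradient(X[idx], y[idx], w)    # mini-batch estimate, ~1% of the work

# Cosine similarity: the cheap estimate points in nearly the same direction.
cos = full_grad @ mini_grad / (np.linalg.norm(full_grad) * np.linalg.norm(mini_grad))
print(f"cosine similarity: {cos:.3f}")
```

On problems like this the similarity is typically well above 0.9, which is the practical justification for descending on mini-batch estimates rather than paying for the exact gradient.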

Gradient Clipping


Gradient clipping is a technique to prevent exploding gradients in very deep networks, most commonly in recurrent neural networks. A neural network (or neural net) is a learning algorithm that uses a network of functions to map input data to a specific output; this type of algorithm is loosely modeled on the way neurons function in the human brain. There are several ways to perform gradient clipping, but a common one is to rescale gradients so that their norm is at most a particular value: a pre-determined threshold is introduced, and any gradient whose norm exceeds that threshold is scaled down to match it.
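The rescale-by-norm scheme described above can be sketched in a few lines (a minimal NumPy illustration; the threshold value and example gradient are assumptions for demonstration):

```python
import numpy as np

def clip_by_norm(grad, max_norm):
    """Rescale grad so its L2 norm is at most max_norm.

    Gradients with norm <= max_norm are returned unchanged; larger
    gradients keep their direction but are scaled down to the threshold.
    """
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad

g = np.array([3.0, 4.0])         # norm 5, exceeds the threshold
clipped = clip_by_norm(g, 1.0)   # -> [0.6, 0.8], norm exactly 1.0
print(clipped, np.linalg.norm(clipped))
```

Note that clipping by norm preserves the gradient's direction, unlike element-wise clipping, which can change it.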

Which Neural Net Architectures Give Rise to Exploding and Vanishing Gradients?

Neural Information Processing Systems

We give a rigorous analysis of the statistical behavior of gradients in a randomly initialized fully connected network N with ReLU activations. Our results show that the empirical variance of the squares of the entries in the input-output Jacobian of N is exponential in a simple architecture-dependent constant beta, given by the sum of the reciprocals of the hidden layer widths. When beta is large, the gradients computed by N at initialization vary wildly. Our approach complements the mean field theory analysis of random networks. From this point of view, we rigorously compute finite width corrections to the statistics of gradients at the edge of chaos.
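The architecture-dependent constant beta from the abstract is simple to compute for a given network; the sketch below uses hypothetical hidden-layer widths (the widths are assumptions, only the formula — the sum of reciprocals of hidden layer widths — comes from the paper's abstract):

```python
# beta = sum of reciprocals of the hidden layer widths (per the abstract).
# The widths below are hypothetical, chosen only to illustrate the formula.
widths = [256, 256, 128, 64]
beta = sum(1.0 / n for n in widths)
print(f"beta = {beta}")  # a smaller beta predicts better-behaved gradients at init
```

Since beta shrinks as layers widen, the result suggests that wide layers (or fewer layers at fixed width) keep gradient fluctuations at initialization under control.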

Deep Learning in Agriculture: MATLAB for Plant Classification


Deep learning is used in agriculture for several tasks such as quality assessment of crop and vegetation, autonomous fruit picking, and the classification and detection of different species. We will focus on classification in this webinar where we will learn to utilise the capability of a deep learning model to automate identification of flowers. From preparing the images to training and evaluating an existing deep neural network, we will explore how efficiently MATLAB handles these assignments. Besides exploring the features of deep neural network, we will deploy the newly trained model in MATLAB to classify more flower images.