gradient clipping


Understanding Gradient Clipping in Private SGD: A Geometric Perspective

Neural Information Processing Systems

Deep learning models are increasingly popular in many machine learning applications where the training data may contain sensitive information. To provide formal and rigorous privacy guarantees, many learning systems now incorporate differential privacy by training their models with (differentially) private SGD. A key step in each private SGD update is gradient clipping, which shrinks the gradient of an individual example whenever its l2 norm exceeds a certain threshold. We first demonstrate how gradient clipping can prevent SGD from converging to a stationary point. We then provide a theoretical analysis of private SGD with gradient clipping. Our analysis fully characterizes the clipping bias on the gradient norm, which can be upper bounded by the Wasserstein distance between the gradient distribution and a geometrically symmetric distribution. Our empirical evaluation further suggests that the gradient distributions along the trajectory of private SGD indeed exhibit such symmetric structure. Together, our results provide an explanation of why private SGD with gradient clipping remains effective in practice despite its potential clipping bias. Finally, we develop a new perturbation-based technique that can provably correct the clipping bias even for instances with highly asymmetric gradient distributions.
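The per-example clipping step the abstract describes can be sketched as follows; this is an illustrative NumPy sketch of one private SGD update, not the paper's exact algorithm, and the threshold C, noise multiplier sigma, and learning rate are hypothetical values:

```python
import numpy as np

def private_sgd_step(w, per_example_grads, C=1.0, sigma=1.0, lr=0.1, rng=None):
    """One (differentially) private SGD update: clip each per-example
    gradient to l2 norm at most C, average, add Gaussian noise, step."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Shrink the gradient whenever its l2 norm exceeds the threshold C.
        clipped.append(g * min(1.0, C / norm) if norm > 0 else g)
    # Average the clipped gradients and add noise calibrated to C.
    noisy_mean = np.mean(clipped, axis=0) + rng.normal(
        0.0, sigma * C / len(clipped), size=np.shape(w))
    return w - lr * noisy_mean
```

Note that clipping changes the direction of the averaged gradient whenever different examples are scaled by different factors, which is the source of the clipping bias the paper analyzes.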


A Clipped Trip: the Dynamics of SGD with Gradient Clipping in High-Dimensions

Marshall, Noah, Xiao, Ke Liang, Agarwala, Atish, Paquette, Elliot

arXiv.org Machine Learning

The success of modern machine learning is due in part to the adaptive optimization methods that have been developed to deal with the difficulties of training large models over complex datasets. One such method is gradient clipping: a practical procedure with limited theoretical underpinnings. In this work, we study clipping in a least squares problem under streaming SGD. We develop a theoretical analysis of the learning dynamics in the limit of large intrinsic dimension--a model and dataset dependent notion of dimensionality. In this limit we find a deterministic equation that describes the evolution of the loss. We show that with Gaussian noise clipping cannot improve SGD performance. Yet, in other noisy settings, clipping can provide benefits with tuning of the clipping threshold. In these cases, clipping biases updates in a way beneficial to training which cannot be recovered by SGD under any schedule. We conclude with a discussion about the links between high-dimensional clipping and neural network training.
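The setting described above can be illustrated with a toy simulation; this is a minimal sketch of streaming SGD with norm clipping on a least squares problem, loosely matching the paper's setup, with the dimension, noise level, threshold c, and step size all chosen for illustration rather than taken from the paper:

```python
import numpy as np

def clipped_streaming_sgd(w_star, d=50, steps=2000, lr=0.05, c=1.0, seed=0):
    """Streaming SGD on least squares: each step draws a fresh sample,
    computes the per-sample gradient, and clips its l2 norm to at most c."""
    rng = np.random.default_rng(seed)
    w = np.zeros(d)
    for _ in range(steps):
        x = rng.normal(size=d) / np.sqrt(d)   # fresh sample each step (streaming)
        y = x @ w_star + 0.1 * rng.normal()   # noisy linear target
        g = (x @ w - y) * x                   # per-sample least squares gradient
        norm = np.linalg.norm(g)
        if norm > c:                          # clip: rescale norm to at most c
            g = g * (c / norm)
        w -= lr * g
    return w
```

With Gaussian sample noise, as the abstract notes, tuning c cannot beat plain SGD; the simulation is only meant to show the mechanics of the clipped update.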


Convergence in deep learning. In deep learning, convergence refers to…

#artificialintelligence

In deep learning, convergence refers to the point at which the training process reaches a stable state: the parameters of the network (its weights and biases) have settled on values that produce accurate predictions for the training data. A neural network is considered to have converged when the training error (or loss) stops decreasing or reaches an acceptable minimum. This is achieved by adjusting the weights and biases through an optimization algorithm, typically gradient descent, which iteratively updates the parameters in the direction of the negative gradient of the loss function with respect to those parameters. During training, the weights and biases are adjusted so that the network's predictions are as close as possible to the true values for the training data. In practice, convergence is usually declared when the loss stops decreasing significantly with the number of iterations (or epochs), indicating that the network's parameters are no longer improving.
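The convergence criterion described above can be sketched as a simple stopping rule; this is a minimal example using gradient descent on a quadratic loss, where the tolerance and learning rate are illustrative choices:

```python
import numpy as np

def gradient_descent_until_converged(grad, loss, w0, lr=0.1, tol=1e-8,
                                     max_iter=10000):
    """Run gradient descent until the loss stops decreasing significantly
    (decrease per iteration falls below tol), i.e. until convergence."""
    w = np.asarray(w0, dtype=float)
    prev = loss(w)
    for i in range(max_iter):
        w = w - lr * grad(w)          # step along the negative gradient
        cur = loss(w)
        if prev - cur < tol:          # loss no longer decreasing significantly
            return w, i + 1
        prev = cur
    return w, max_iter

# Example: minimize f(w) = (w - 3)^2, whose minimum is at w = 3.
loss = lambda w: float(np.sum((w - 3.0) ** 2))
grad = lambda w: 2.0 * (w - 3.0)
w, n_iters = gradient_descent_until_converged(grad, loss, [0.0])
```

Real training loops typically monitor validation loss over a window of epochs ("patience") rather than a single-iteration decrease, but the idea is the same.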


Gradient Clipping

#artificialintelligence

Gradient clipping is a technique to prevent exploding gradients in very deep networks, most often in recurrent neural networks. A neural network (or neural net) is a learning algorithm that uses a network of functions to understand and translate input data into a specific output; this type of algorithm is designed based on the way neurons function in the human brain. There are many ways to compute gradient clipping, but a common one is to rescale gradients so that their norm is at most a particular value. With this approach, a pre-determined gradient threshold is introduced, and gradients whose norms exceed this threshold are scaled down to match it.
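The norm-based rescaling described above can be written as a short NumPy sketch (the threshold value is whatever the practitioner pre-determines):

```python
import numpy as np

def clip_by_norm(grad, threshold):
    """Rescale a gradient so its l2 norm is at most `threshold`."""
    norm = np.linalg.norm(grad)
    if norm > threshold:
        return grad * (threshold / norm)  # scale down to match the threshold
    return grad                           # below threshold: leave unchanged
```

Note that this rescaling preserves the gradient's direction and only shrinks its magnitude, which is why it is usually preferred over clipping each component independently.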