Goto


A Glance at Optimization algorithms for Deep Learning

#artificialintelligence

Batch Gradient Descent, Mini-batch Gradient Descent, and Stochastic Gradient Descent are gradient-optimization techniques that differ in the batch size they use to compute gradients in each iteration. Batch Gradient Descent uses all of the data to compute gradients and update the weights in each iteration. Mini-batch Gradient Descent updates the weights using a subset of the dataset in each iteration; it takes more iterations to converge to the minimum, but each iteration is faster than in Batch Gradient Descent because the batch is smaller. Stochastic Gradient Descent (SGD), also sometimes called on-line gradient descent, is the extreme case in which each update uses a single example.
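
Below is a minimal sketch of how the three variants differ, assuming a simple mean-squared-error objective on synthetic data; the function names and hyperparameters are illustrative, not taken from the article. The only thing that changes between the three calls is the batch size.

    # The three gradient-descent variants differ only in how many examples
    # are used to compute each gradient update.
    import numpy as np

    def gradient_descent(X, y, batch_size, lr=0.05, epochs=100, seed=0):
        rng = np.random.default_rng(seed)
        w = np.zeros(X.shape[1])
        n = len(y)
        for _ in range(epochs):
            idx = rng.permutation(n)
            for start in range(0, n, batch_size):
                batch = idx[start:start + batch_size]
                Xb, yb = X[batch], y[batch]
                grad = 2.0 / len(batch) * Xb.T @ (Xb @ w - yb)  # MSE gradient
                w -= lr * grad
        return w

    # Synthetic data: y = 3*x0 - 2*x1 + noise
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = X @ np.array([3.0, -2.0]) + 0.1 * rng.normal(size=200)

    w_batch = gradient_descent(X, y, batch_size=len(y))  # Batch Gradient Descent
    w_mini = gradient_descent(X, y, batch_size=32)       # Mini-batch Gradient Descent
    w_sgd = gradient_descent(X, y, batch_size=1)         # Stochastic Gradient Descent
    print(w_batch, w_mini, w_sgd)                        # all approach [3, -2]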


A Brief (and Comprehensive) Guide to Stochastic Gradient Descent Algorithms - Giuseppe Bonaccorso

#artificialintelligence

Stochastic Gradient Descent (SGD) is a very powerful technique, currently employed to optimize almost all deep learning models. However, the vanilla algorithm has many limitations, in particular when the problem is ill-conditioned, in which case it may never find the global minimum. In this post, we're going to analyze how it works and the most important variations that can speed up convergence in deep models.
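
As an illustration of one such variation, the sketch below shows the classical momentum update, one of the variants commonly used to speed up SGD on ill-conditioned problems; the code is not taken from the post, and the quadratic test function is only a stand-in for a real loss.

    # Momentum keeps an exponentially decaying average of past gradients,
    # damping oscillations along steep, ill-conditioned directions.
    # (Shown on a deterministic quadratic for simplicity.)
    import numpy as np

    def sgd_momentum(grad_fn, w0, lr=0.01, beta=0.9, steps=1000):
        w = np.array(w0, dtype=float)
        v = np.zeros_like(w)
        for _ in range(steps):
            v = beta * v + grad_fn(w)   # accumulate velocity
            w = w - lr * v              # parameter update
        return w

    # Ill-conditioned quadratic: f(w) = 0.5 * (w0**2 + 100 * w1**2)
    scales = np.array([1.0, 100.0])
    grad = lambda w: scales * w
    print(sgd_momentum(grad, w0=[1.0, 1.0]))   # approaches the minimum at (0, 0)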


Understanding Linear Regression

#artificialintelligence

Linear regression is a regression model that outputs a numeric value. It is used to predict an outcome from a linear combination of the inputs. As you can guess, this function represents a straight line in the coordinate system. The hypothesis function (h0) approximates the output for a given input. A linear regression model can represent either a univariate or a multivariate problem.
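
As a minimal sketch, the univariate hypothesis can be written as h(x) = theta0 + theta1 * x and fitted by ordinary least squares; the notation and fitting method below are illustrative and may differ from the article's.

    import numpy as np

    # Synthetic univariate data around the true line y = 4 + 2.5x.
    rng = np.random.default_rng(1)
    x = rng.uniform(0, 10, size=100)
    y = 4.0 + 2.5 * x + rng.normal(scale=1.0, size=100)

    X = np.column_stack([np.ones_like(x), x])       # add an intercept column
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)   # [theta0, theta1]

    def h(x_new):
        """Hypothesis function: predicted output for a given input."""
        return theta[0] + theta[1] * x_new

    print(theta)    # close to [4.0, 2.5]
    print(h(3.0))   # prediction at x = 3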


Towards Statistical and Computational Complexities of Polyak Step Size Gradient Descent

arXiv.org Machine Learning

We study the statistical and computational complexities of the Polyak step size gradient descent algorithm under generalized smoothness and Lojasiewicz conditions on the population loss function, namely, the limit of the empirical loss function when the sample size goes to infinity, together with a stability condition between the gradients of the empirical and population loss functions, namely, a polynomial growth bound on the concentration between the gradients of the sample and population loss functions. We demonstrate that the Polyak step size gradient descent iterates reach a final statistical radius of convergence around the true parameter after a number of iterations that is logarithmic in the sample size. This is computationally cheaper than fixed step size gradient descent, which requires a number of iterations polynomial in the sample size to reach the same final statistical radius when the population loss function is not locally strongly convex. Finally, we illustrate our general theory with three statistical examples: the generalized linear model, the mixture model, and the mixed linear regression model.
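
For reference, the classical Polyak step size sets the learning rate at each iteration to the current suboptimality f(x_t) - f* divided by the squared gradient norm, which requires knowing the optimal value f*. The sketch below shows this textbook rule on a toy quadratic; the paper's analysis covers more general conditions than this example.

    import numpy as np

    def polyak_gd(f, grad, x0, f_star=0.0, steps=500, eps=1e-12):
        x = np.array(x0, dtype=float)
        for _ in range(steps):
            g = grad(x)
            gnorm2 = g @ g
            if gnorm2 < eps:                     # gradient vanished: stop
                break
            eta = (f(x) - f_star) / gnorm2       # Polyak step size
            x = x - eta * g
        return x

    # Ill-conditioned quadratic with minimum value 0 at the origin.
    A = np.diag([1.0, 10.0])
    f = lambda x: 0.5 * x @ A @ x
    grad = lambda x: A @ x
    print(polyak_gd(f, grad, x0=[5.0, 5.0]))   # approaches (0, 0)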


My Journey into Machine Learning: Class 5 (Regression)

@machinelearnbot

In the third article, I introduced the core concept of Linear Regression. To recap, we want to have a function f that models our data. We build an approximation of f, called g. We use an error function to measure how well g approximates the data. The value of our error function is not that great.
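
The blurb does not name a specific error function; a common choice for regression is the mean squared error between the data and the predictions of g, sketched below as an assumption rather than the article's own definition.

    import numpy as np

    def mean_squared_error(y_true, y_pred):
        """Average squared gap between targets and predictions."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        return np.mean((y_true - y_pred) ** 2)

    # Example: data generated by some underlying f, approximated by a candidate g.
    x = np.linspace(0, 1, 50)
    y = 2.0 * x + 1.0                      # stand-in for samples of f
    g = lambda x: 1.8 * x + 1.1            # an imperfect approximator g
    print(mean_squared_error(y, g(x)))     # small but nonzero error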