machine learning .p11 - Little maths behind Gradient Descent. [Hindi]


Hello geeks, this is the 11th video of the machine learning tutorial. In this video we'll cover a little of the maths behind the gradient descent method. You don't need to be a great mathematician for this; a high-school student can easily understand the concept.
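The maths the video refers to boils down to one update rule: repeatedly step opposite the derivative. A minimal sketch on a toy quadratic (the function and learning rate here are illustrative, not from the video):

```python
# Minimal gradient descent on f(w) = (w - 3)^2, whose derivative is 2 * (w - 3).
# The minimum is at w = 3; each step moves w against the gradient.
def gradient_descent(lr=0.1, steps=100):
    w = 0.0  # arbitrary starting point
    for _ in range(steps):
        grad = 2 * (w - 3)   # derivative of (w - 3)^2 at the current w
        w -= lr * grad       # update rule: w := w - lr * gradient
    return w

print(gradient_descent())  # converges toward the minimum at w = 3
```

With a learning rate of 0.1 the distance to the minimum shrinks by a factor of 0.8 per step, so 100 steps land essentially on w = 3.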

Implementing Gradient Descent using NumPy matrix multiplication


In the previous section, I taught you about the log-loss function. There are many other error functions used for neural networks; let me teach you another one, called the mean squared error. As the name says, it is the mean of the squares of the differences between the predictions and the labels. In the following section I'll go over it in detail, then we'll implement backpropagation with it on the same student admissions dataset. And as a bonus, we'll implement this very efficiently using matrix multiplication with NumPy! Our goal is to find the weights for our neural network.
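As a rough sketch of the idea, here is a single sigmoid unit trained on the mean squared error, with the gradient for the whole batch computed in one NumPy matrix multiplication. The toy data, learning rate, and epoch count are illustrative assumptions, not the course's actual student admissions setup:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, lr=1.0, epochs=5000):
    """Train one sigmoid unit on MSE with batch gradient descent."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])  # small random weights
    for _ in range(epochs):
        y_hat = sigmoid(X @ w)        # forward pass for all examples at once
        error = y_hat - y             # prediction minus label
        # d(MSE)/dw = (2/n) * X^T (error * sigmoid'(z)); the single matmul
        # X.T @ (...) sums the per-example gradients in one step.
        grad = (2 / len(y)) * (X.T @ (error * y_hat * (1 - y_hat)))
        w -= lr * grad
    return w

# Toy AND-like dataset; the last column of ones acts as a bias term.
X = np.array([[0., 0., 1.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
y = np.array([0., 0., 0., 1.])
w = train(X, y)
preds = sigmoid(X @ w) > 0.5
```

The same pattern scales to a real dataset: the matrix multiplication replaces an explicit Python loop over rows, which is what makes the NumPy version fast.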

Why Gradient Descent for Optimization?


I have a question regarding the optimization technique used for updating the weights. People generally use gradient descent for the optimization, whether it's SGD or an adaptive variant. Why can't we use other techniques, like Newton-Raphson?
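The usual answer hinges on the second derivative: Newton-Raphson needs it, and for a network with d weights that means a d-by-d Hessian, which is prohibitively expensive to form and invert. A 1-D sketch of the trade-off, on an assumed toy quadratic:

```python
# Compare one Newton-Raphson step with repeated gradient descent steps on
# f(w) = (w - 2)^2 + 1. Newton divides by the second derivative and lands
# exactly on the minimizer of a quadratic in one step; gradient descent
# only needs the first derivative but takes many small steps. In d
# dimensions Newton's "second derivative" is a d x d Hessian, which is why
# plain or stochastic gradient descent is preferred for neural networks.
def f_prime(w):
    return 2 * (w - 2)      # first derivative

def f_second(w):
    return 2.0              # second derivative (constant for a quadratic)

def gradient_step(w, lr=0.1):
    return w - lr * f_prime(w)

def newton_step(w):
    return w - f_prime(w) / f_second(w)

w_nt = newton_step(10.0)    # one step: exactly at the minimizer w = 2
w_gd = 10.0
for _ in range(50):
    w_gd = gradient_step(w_gd)  # gradually approaches w = 2
```

For non-quadratic losses Newton is no longer exact, and quasi-Newton methods (L-BFGS and the like) exist precisely to approximate the Hessian cheaply; at neural-network scale even those are usually outweighed by SGD's cheap per-step cost.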

Gradient Descent for Elastic Net Regression • /r/MachineLearning


I am using the elastic net objective from the Wikipedia page to derive the gradient descent update. What will the gradient descent equation be for this? And as for ridge regression: if I am using very large datasets, is there another way to solve it instead of calculating the inverse?
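A sketch answering both parts. For the elastic net objective (1/2n)||Xw − y||² + α(ρ||w||₁ + ((1 − ρ)/2)||w||²), the (sub)gradient is (1/n)Xᵀ(Xw − y) + α(ρ·sign(w) + (1 − ρ)w); and running gradient descent on the ridge case avoids ever forming or inverting XᵀX + λI. The hyperparameter names follow scikit-learn's conventions, but this is a plain NumPy illustration, not a library API:

```python
import numpy as np

def elastic_net_gd(X, y, alpha=0.1, l1_ratio=0.5, lr=0.01, epochs=5000):
    """Elastic net by (sub)gradient descent; set l1_ratio=0 for pure ridge."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        grad = (X.T @ (X @ w - y)) / n           # data term: one matmul, no inverse
        grad += alpha * l1_ratio * np.sign(w)    # L1 subgradient (0 at w_j = 0)
        grad += alpha * (1 - l1_ratio) * w       # L2 (ridge) gradient
        w -= lr * grad
    return w

# Synthetic check: recover weights [2, -1, 0] (up to shrinkage from the penalty).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.0]) + 0.01 * rng.normal(size=200)
w = elastic_net_gd(X, y)
```

For truly large datasets, swap the full-batch data term for mini-batches (SGD), and note that the L1 term is usually handled better by proximal/soft-thresholding updates (as in coordinate descent, which scikit-learn's `ElasticNet` uses) than by the raw subgradient shown here.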