Gradient Descent


What is Gradient Descent?

#artificialintelligence

This tutorial covers the basics of gradient descent. It is also a continuation of the Intro to Machine Learning post, "What is Machine Learning?", which can be found here. Gradient descent is a method for finding the optimal weights of a model. We use the gradient descent algorithm to find the best machine learning model, the one with the lowest error and highest accuracy. A common explanation of gradient descent is the idea of standing blindfolded on an uneven baseball field and trying to find the lowest point of the field.
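To make the idea concrete, here is a minimal sketch of gradient descent on a one-dimensional loss (not from the tutorial; the quadratic loss, starting point, and learning rate are assumptions for illustration):

```python
def grad(w):
    # Gradient of loss(w) = (w - 3)**2, whose lowest point is at w = 3.
    return 2 * (w - 3)

w = 0.0                            # start somewhere on the "field"
learning_rate = 0.1
for _ in range(100):
    w -= learning_rate * grad(w)   # take a small step downhill

print(round(w, 4))                 # approaches 3.0, the lowest point
```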


How to implement Linear Regression with TensorFlow

#artificialintelligence

Now, let's jump to the implementation. First, we obviously need to import some libraries. The first thing we do inside .fit() is concatenate an extra column of 1s to our input matrix X. This simplifies our math by treating the bias as the weight of an extra variable that is always 1. The .fit() method will be able to learn the parameters using either a closed-form formula or stochastic gradient descent.
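As a rough sketch of what such a .fit() could look like (the class name, arguments, and plain-NumPy implementation below are assumptions for illustration, not the article's TensorFlow code):

```python
import numpy as np

class LinearRegression:
    def fit(self, X, y, method="closed-form", lr=0.01, epochs=100):
        # Concatenate a column of 1s so the bias is just the weight of an
        # extra feature that is always 1.
        X = np.hstack([np.ones((X.shape[0], 1)), X])
        if method == "closed-form":
            # Closed-form (normal equation) solution.
            self.w = np.linalg.pinv(X.T @ X) @ X.T @ y
        else:
            # Stochastic gradient descent on the squared error.
            self.w = np.zeros(X.shape[1])
            for _ in range(epochs):
                for i in np.random.permutation(X.shape[0]):
                    error = X[i] @ self.w - y[i]
                    self.w -= lr * error * X[i]
        return self

    def predict(self, X):
        X = np.hstack([np.ones((X.shape[0], 1)), X])
        return X @ self.w
```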


Gradient Descent Optimization Techniques.

#artificialintelligence

Gradient descent is one of the most popular algorithms to perform optimization and by far the most common way to optimize neural networks. At the same time, every state-of-the-art deep learning library contains implementations of various algorithms to optimize gradient descent. This blog post aims to provide you with intuitions about the behaviour of the different algorithms for optimizing gradient descent so that you can put them to use. Gradient descent is a way to minimize an objective function J(θ), parameterized by a model's parameters θ, by updating the parameters in the opposite direction of the gradient of the objective function, ∇_θ J(θ), with respect to the parameters. The learning rate η determines the size of the steps we take to reach a (local) minimum.
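In code, the vanilla update rule described above, θ ← θ − η · ∇_θ J(θ), might look like the following minimal sketch (the quadratic objective is an assumption for the example):

```python
import numpy as np

def gradient_descent(grad_J, theta, eta=0.1, n_steps=1000):
    # Vanilla update: move theta in the opposite direction of the gradient,
    # with the learning rate eta controlling the step size.
    for _ in range(n_steps):
        theta = theta - eta * grad_J(theta)
    return theta

# Example: J(theta) = ||theta||^2, so grad_J(theta) = 2 * theta.
theta_star = gradient_descent(lambda t: 2 * t, theta=np.array([4.0, -2.0]))
print(theta_star)  # close to [0, 0], the (global) minimum
```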


Tutorial On Keras Tokenizer For Text Classification in NLP

#artificialintelligence

Now we will compile the model, using stochastic gradient descent as the optimizer, cross-entropy as the loss, and accuracy as the metric to measure performance. After compiling, we will train the model and check its performance on validation data. We use a batch size of 64 and train for 10 epochs.
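A rough sketch of that compile-and-fit step in Keras (the architecture, data shapes, and dummy data below are assumptions for illustration, not the tutorial's actual model):

```python
import numpy as np
from tensorflow import keras

# Illustrative model standing in for the text-classification network.
model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(100,)),
    keras.layers.Dense(3, activation="softmax"),
])

# Compile with SGD, cross-entropy loss, and accuracy as the metric.
model.compile(optimizer="sgd",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data standing in for the tokenized text features.
X_train, y_train = np.random.rand(640, 100), np.random.randint(0, 3, 640)
X_val, y_val = np.random.rand(64, 100), np.random.randint(0, 3, 64)

# Train with a batch size of 64 for 10 epochs, checking validation performance.
model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          batch_size=64, epochs=10)
```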


New Ways for Optimizing Gradient Descent

#artificialintelligence

The new era of machine learning and artificial intelligence is the deep learning era. It brings not only remarkable accuracy but also a huge hunger for data. Using neural nets, functions of far greater complexity can be fitted to given data points. But there are a few specific techniques that make working with neural networks far more effective and insightful. Let us assume that we have trained a huge neural network.


Gradient Descent and the Normal Equation for Linear Regression with Practical Implementation in…

#artificialintelligence

When I started my machine learning journey, math was something that always intrigued me, and it still does. I for one believe that libraries such as scikit-learn have done wonders for us when it comes to implementing the algorithms, but without an understanding of the math that goes into an algorithm, we are bound to make mistakes on complicated problems. In this article, I will go over the math behind Gradient Descent and the derivation of the Normal Equation, and then implement them both on a dataset to obtain my coefficients. When I was getting started with Linear Regression and trying to understand the different ways to calculate the coefficients, the Normal Equation was by far my favorite method, but where does this equation come from? Well, let us take a look.
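As a small illustrative sketch (not the article's actual implementation), the Normal Equation θ = (XᵀX)⁻¹Xᵀy can be applied directly with NumPy; the synthetic data below is an assumption for the example:

```python
import numpy as np

def normal_equation(X, y):
    # Add a column of 1s so the intercept is learned as a coefficient too.
    X = np.hstack([np.ones((X.shape[0], 1)), X])
    # theta = (X^T X)^{-1} X^T y  (pinv is used for numerical stability)
    return np.linalg.pinv(X.T @ X) @ X.T @ y

# Synthetic data: y = 2 + 3x plus a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=(100, 1))
y = 2 + 3 * x[:, 0] + rng.normal(scale=0.5, size=100)
print(normal_equation(x, y))  # coefficients close to [2, 3]
```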


Implement a Neural Network from Scratch with NumPy

#artificialintelligence

I think that the best way to really understand how a neural network works is to implement one from scratch. That is exactly what I am going to do in this article. I will create a neural network class, and I want to design it to be flexible. I do not want to hardcode specific activation functions, loss functions, or optimizers (that is, SGD, Adam, or other gradient-based methods) into it. I will design it to receive these from outside the class, so that one can just take the class's code and pass in whatever activation/loss/optimizer they want.
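A rough skeleton of that kind of flexible design (the class name, constructor arguments, and forward pass below are assumptions for illustration, not the article's actual code):

```python
import numpy as np

class NeuralNetwork:
    """Activation, loss, and optimizer are injected rather than hardcoded."""

    def __init__(self, layer_sizes, activation, loss, optimizer=None):
        self.activation = activation   # e.g. lambda z: np.maximum(z, 0)
        self.loss = loss               # e.g. a mean-squared-error callable
        self.optimizer = optimizer     # e.g. an object with an update(weights, grads) method
        rng = np.random.default_rng(0)
        self.weights = [rng.normal(scale=0.1, size=(m, n))
                        for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

    def forward(self, X):
        # Each layer: linear transform followed by the injected activation.
        for W in self.weights:
            X = self.activation(X @ W)
        return X

# Usage sketch: pass in whatever activation/loss you want
# (the optimizer is omitted in this forward-only sketch).
relu = lambda z: np.maximum(z, 0)
mse = lambda y_hat, y: np.mean((y_hat - y) ** 2)
net = NeuralNetwork([4, 8, 1], activation=relu, loss=mse)
print(net.forward(np.ones((2, 4))).shape)  # (2, 1)
```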


"I have a joke…"

AIHub

Twitter users will have seen the proliferation of "I have a joke" tweets in their feed over the past few days. The AI community produced some gems so we've collected a selection here for your amusement. I have a reinforcement learning joke, but not sure it's rewarding. I have a stochastic gradient descent joke but the punchline isn't on this saddle point https://t.co/B7GM2tmz5Z I have a deep learning joke but it has a lot of layers to it.


Gradient Descent, the Learning Rate, and the importance of Feature Scaling

#artificialintelligence

The content of this post is a partial reproduction of a chapter from the book: "Deep Learning with PyTorch Step-by-Step: A Beginner's Guide". What do gradient descent, the learning rate, and feature scaling have in common? Every time we train a deep learning model, or any neural network for that matter, we're using gradient descent (with backpropagation). We use it to minimize a loss by updating the parameters/weights of the model. The parameter update depends on two values: a gradient and a learning rate. The learning rate gives you control of how big (or small) the updates are going to be. A bigger learning rate means bigger updates and, hopefully, a model that learns faster.
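As a tiny illustrative sketch (the values are chosen arbitrarily), the update is just the gradient scaled by the learning rate, so a bigger learning rate means a bigger step:

```python
def update(parameter, gradient, learning_rate):
    # new parameter = parameter - learning_rate * gradient
    return parameter - learning_rate * gradient

w, grad = 1.0, 4.0
print(update(w, grad, learning_rate=0.01))  # small step:  0.96
print(update(w, grad, learning_rate=0.5))   # big step:   -1.0
```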


Neural Networks Part 2: Building Neural Networks & Understanding Gradient Descent.

#artificialintelligence

From the previous article, we learnt how a single neuron or perceptron works: it takes the dot product of the input vector and the weights, adds a bias, and then applies a non-linear activation function to produce an output. Now let's take that information and see how these neurons build up to a neural network. Here z = w0 + Σj xj*wj denotes the bias plus the dot product of the input vector and the weights, and our final output y is just the activation function applied to z. Now, if we want a multi-output neural network (as in the diagram above), we can simply add another of these perceptrons, and we have two outputs, each with its own set of weights and inputs. Since all the inputs are densely connected to all the outputs, these layers are also called dense layers. To implement this layer, we can use many libraries such as Keras, TensorFlow, PyTorch, etc. The snippet below shows a TensorFlow implementation of this two-perceptron network, where units=2 indicates that we have two outputs in this layer. We can customise this layer by adding an activation function, a bias constraint, etc. Now, let's take a step further and understand how a single-layer neural network works, where we have a single hidden layer that feeds into the output layer. We call it a hidden layer because, unlike the input and output layers, which we can directly observe, hidden layers are not directly observable; we can probe inside the network and inspect them with tools such as Netron, but we cannot enforce their values, as they are learned.
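A minimal sketch of such a two-output dense layer in TensorFlow (the 3-dimensional input vector is an assumption for the example):

```python
import tensorflow as tf

# Two outputs, i.e. two perceptrons, densely connected to all inputs.
layer = tf.keras.layers.Dense(units=2)

x = tf.constant([[1.0, 2.0, 3.0]])   # a single 3-dimensional input vector
y = layer(x)                         # y = activation(x @ W + b); linear by default
print(y.shape)                       # (1, 2): one sample, two outputs

# The layer can be customised, e.g. with an activation function and a bias constraint:
layer_sigmoid = tf.keras.layers.Dense(
    units=2,
    activation="sigmoid",
    bias_constraint=tf.keras.constraints.NonNeg(),
)
```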