Practical Guide to implementing Neural Networks in Python (using Theano)

#artificialintelligence

In my last article, I discussed the fundamentals of deep learning, where I explained the basic working of an artificial neural network. If you've been following this series, today we'll become familiar with the practical process of implementing a neural network in Python (using the Theano package). I found various other packages that can do this job as well, such as Caffe, Torch, and TensorFlow, but Theano is no less capable and executes all of these tasks satisfactorily. It also has multiple benefits that further enhance the coding experience in Python.
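To give a feel for what this looks like in practice, here is a minimal sketch (not the article's code; the layer sizes, learning rate, and variable names are illustrative assumptions) of a one-hidden-layer network built as a Theano symbolic graph and compiled into callable functions:

```python
# Minimal one-hidden-layer network in Theano (illustrative sketch only).
import numpy as np
import theano
import theano.tensor as T

rng = np.random.RandomState(0)
n_in, n_hidden, n_out = 4, 8, 3            # hypothetical layer sizes

# Symbolic inputs: a mini-batch of examples and their integer labels
x = T.matrix('x')
y = T.ivector('y')

# Shared (trainable) parameters
W1 = theano.shared(rng.randn(n_in, n_hidden).astype(theano.config.floatX), name='W1')
b1 = theano.shared(np.zeros(n_hidden, dtype=theano.config.floatX), name='b1')
W2 = theano.shared(rng.randn(n_hidden, n_out).astype(theano.config.floatX), name='W2')
b2 = theano.shared(np.zeros(n_out, dtype=theano.config.floatX), name='b2')

# Forward pass: tanh hidden layer, softmax output, negative log-likelihood loss
hidden = T.tanh(T.dot(x, W1) + b1)
p_y = T.nnet.softmax(T.dot(hidden, W2) + b2)
loss = -T.mean(T.log(p_y)[T.arange(y.shape[0]), y])

# Gradients via symbolic differentiation and a plain SGD update rule
params = [W1, b1, W2, b2]
grads = T.grad(loss, params)
updates = [(p, p - 0.1 * g) for p, g in zip(params, grads)]

# Compile the graph into callable functions
train = theano.function([x, y], loss, updates=updates)
predict = theano.function([x], T.argmax(p_y, axis=1))
```

Once compiled, calling train on a batch of inputs and labels performs one gradient step, and predict returns the most likely class for each row of the input.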


CS231n Convolutional Neural Networks for Visual Recognition

#artificialintelligence

It is possible to introduce neural networks without appealing to brain analogies. In the section on linear classification we computed scores for different visual categories given the image using the formula \( s = W x \), where \(W\) was a matrix and \(x\) was an input column vector containing all pixel data of the image. In the case of CIFAR-10, \(x\) is a [3072x1] column vector, and \(W\) is a [10x3072] matrix, so that the output is a vector of 10 class scores. An example neural network would instead compute the scores as \( s = W_2 \max(0, W_1 x) \), where \(W_1\) maps the image to an intermediate vector. The function \( \max(0,-) \) is a non-linearity that is applied elementwise. There are several choices we could make for the non-linearity (which we'll study below), but this one is a common choice and simply thresholds all activations that are below zero to zero. Notice that the non-linearity is critical computationally: if we left it out, the two matrices could be collapsed to a single matrix, and therefore the predicted class scores would again be a linear function of the input.
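A short NumPy sketch of the two score functions discussed above (the [3072x1] and [10x3072] shapes come from the text; the 100-unit hidden layer and random values are illustrative assumptions):

```python
import numpy as np

# Linear classifier: s = W x, as in the linear classification section.
x = np.random.rand(3072, 1)              # one flattened CIFAR-10 image as a [3072x1] column
W = np.random.randn(10, 3072) * 0.01     # [10x3072] weight matrix
s_linear = W.dot(x)                      # [10x1] vector of class scores

# Two-layer network: s = W2 max(0, W1 x); max(0, .) is the elementwise ReLU non-linearity.
W1 = np.random.randn(100, 3072) * 0.01   # maps the image to a 100-d intermediate vector (example size)
W2 = np.random.randn(10, 100) * 0.01     # maps the intermediate vector to 10 class scores
hidden = np.maximum(0, W1.dot(x))        # without this thresholding, W2.dot(W1) collapses to one matrix
s_two_layer = W2.dot(hidden)             # [10x1] vector of class scores
```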


Deep Learning Lesson 1: A Single Neuron

#artificialintelligence

Welcome to the first lesson in our Practicing Deep Learning Series. Thoughtly is writing a multi-part tutorial series focused on understanding the foundations of Deep Learning, specifically as they apply to Natural Language Processing. This series, like our previous series, is targeted towards practitioners of machine learning. Now we are looking to provide information for developers who wish to cultivate a working familiarity with neural networks (NN) and deep learning (DL). Our goal is to help ML students, amateurs and professionals move from an awareness of neural networks to a working familiarity with the tools and workflows necessary to accomplish real-world tasks with a neural network.
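As a preview of where the lesson is headed, a single artificial neuron just computes a weighted sum of its inputs plus a bias and passes the result through a non-linearity. The sketch below (a minimal illustration, not the tutorial's code; the weights and inputs are arbitrary) uses a sigmoid activation:

```python
import numpy as np

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs plus a bias,
    squashed by a sigmoid activation into the range (0, 1)."""
    z = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-z))

# Example: three inputs with hand-picked (illustrative) weights.
print(neuron(np.array([0.5, -1.0, 2.0]),
             np.array([0.8, 0.2, -0.5]),
             bias=0.1))
```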


Deep Learning Lesson 4: Multilayer Networks and Booleans

#artificialintelligence

Here we are, part four of our Practicing Deep Learning Series. At last, we are at the point where we will start to examine multilayer neural networks! We've spent a decent amount of time building up to this point, but with good reason. It can be easy to gloss over the details of the individual neurons, and we felt the risk of being too verbose outweighed the risk of being uninformative. The real power of neural networks becomes increasingly apparent when we start making multi-layer networks. In this post, we're going to describe the basic multi-layer network and look at examples of some of the simple tasks it can solve. We're going to dive into two specific examples, and we provide code for those two, plus a few others. We'll point you to that code as we go.
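As a concrete example of the kind of Boolean task a multi-layer network can solve (a minimal sketch with hand-set weights, not the code provided with the post), here is a two-layer network that computes XOR, a function no single neuron with a threshold activation can represent:

```python
import numpy as np

def step(z):
    """Threshold activation: 1 if z > 0, else 0."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    """Two-layer network computing XOR with hand-set weights.
    The hidden layer computes OR and AND of the inputs; the output
    neuron fires when OR is on but AND is off, which is exactly XOR."""
    x = np.array([x1, x2])
    h_or = step(x.sum() - 0.5)    # OR gate
    h_and = step(x.sum() - 1.5)   # AND gate
    return step(h_or - h_and - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, '->', xor_net(a, b))
```

Collapsing the hidden layer away would leave a single linear threshold unit, which cannot separate the XOR inputs; the intermediate OR and AND units are what make the function representable.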