Basics of AI - Backpropagation (Video)

#artificialintelligence

This is a superb and very simple explanation of one of the basic concepts of AI. It should be included in most advanced math curricula, if only to demystify what AI really is. It also explains why AI can now distinguish dogs from cats or recognize individuals. Source: End to End Machine Learning library.


Neural Network Gradients: Backpropagation, Dual Numbers, Finite Differences

#artificialintelligence

In the post How to Train Neural Networks With Backpropagation, I said that you could also calculate the gradient of a neural network by using dual numbers or finite differences. That post explains backpropagation itself. Since the fundamentals are covered in the links above, we'll go straight to the code. We'll be getting the gradient (learning values) for the network in example 4 of the backpropagation post. Note that I am using "central differences" for the gradient; a forward or backward difference would be more efficient, at the cost of some accuracy. I didn't compare the running times of each method, as my code is meant to be readable, not fast, and it isn't doing enough work to make a meaningful performance test, IMO.
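
For concreteness, here is a minimal sketch of the central-differences idea in Python/NumPy (not the post's actual code): nudge each weight up and down by a small epsilon, re-run the forward pass, and divide the change in the loss by 2*epsilon to approximate that weight's gradient. The tiny 2-2-1 network, its weight layout, and the sample input are assumptions for illustration, not the network from example 4.

```python
# Sketch: estimating a network's gradient with central differences.
import numpy as np

def forward_loss(weights, x, target):
    # Tiny 2-2-1 network with sigmoid activations; the flat `weights`
    # layout is an assumption chosen for this illustration.
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    W1, b1 = weights[:4].reshape(2, 2), weights[4:6]
    W2, b2 = weights[6:8].reshape(1, 2), weights[8:9]
    h = sigmoid(W1 @ x + b1)
    y = sigmoid(W2 @ h + b2)
    return 0.5 * np.sum((y - target) ** 2)

def central_difference_gradient(weights, x, target, eps=1e-4):
    grad = np.zeros_like(weights)
    for i in range(len(weights)):
        w_plus, w_minus = weights.copy(), weights.copy()
        w_plus[i] += eps
        w_minus[i] -= eps
        # Two forward passes per weight; a forward or backward difference
        # would need only one extra pass, but is less accurate.
        grad[i] = (forward_loss(w_plus, x, target) -
                   forward_loss(w_minus, x, target)) / (2 * eps)
    return grad

weights = np.random.default_rng(0).normal(size=9)
print(central_difference_gradient(weights, np.array([0.25, 0.75]), np.array([1.0])))
```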


A Visual Explanation of the Back Propagation Algorithm for Neural Networks

#artificialintelligence

Let's assume we are really into mountain climbing, and to add a little extra challenge, we cover our eyes this time so that we can't see where we are or when we have accomplished our "objective," that is, reaching the top of the mountain. Since we can't see the path up front, we let our intuition guide us: assuming that the mountain top is the "highest" point of the mountain, we think that the steepest path leads us to the top most efficiently. We approach this challenge by iteratively "feeling" around us and taking a step in the direction of the steepest ascent -- let's call it "gradient ascent." But what do we do if we reach a point where we can't ascend any further, i.e., where every direction leads downwards?
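
As a minimal illustration of the climbing analogy, here is a gradient-ascent sketch in Python on a toy two-dimensional "mountain"; the hill function, step size, and stopping rule are assumptions chosen for the example, not anything from the article.

```python
# Sketch: "blindfolded" gradient ascent on a smooth bump.
import numpy as np

def height(p):
    # A smooth bump centered at (1, 2): the "mountain top" we cannot see.
    return np.exp(-((p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2))

def gradient(p):
    # Analytic gradient of the bump; for a neural network, backpropagation
    # is what supplies this "feel for the slope."
    return -2.0 * np.array([p[0] - 1.0, p[1] - 2.0]) * height(p)

p = np.array([0.0, 0.0])          # start somewhere on the slope
for step in range(200):
    g = gradient(p)
    if np.linalg.norm(g) < 1e-6:  # every direction leads downward: a (local) top
        break
    p = p + 0.5 * g               # step in the direction of steepest ascent

print(p)  # approaches (1, 2), the summit of this particular bump
```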


Coding Neural Network Back-Propagation Using C# -- Visual Studio Magazine

#artificialintelligence

Back-propagation is the most common algorithm used to train neural networks, and there are many ways it can be implemented. This article presents a C# implementation that closely mirrors the terminology and explanation of back-propagation given in the Wikipedia entry on the topic.
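
The article's implementation is in C#; purely as a language-agnostic sketch of the same algorithm, here is a minimal one-hidden-layer network trained with back-propagation in Python/NumPy. The network size, toy data, learning rate, and epoch count are illustrative assumptions, not the article's code.

```python
# Sketch: forward pass, backward pass, and weight update for a tiny network.
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy XOR data and a 2-3-1 network; sizes and learning rate are illustrative.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([[0.0], [1.0], [1.0], [0.0]])
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)
lr = 0.5

for epoch in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)                    # hidden activations
    out = sigmoid(h @ W2 + b2)                  # network output

    # Backward pass: propagate the error back through each layer
    # (squared-error loss, sigmoid derivative a * (1 - a)).
    delta_out = (out - y) * out * (1 - out)
    delta_h = (delta_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight update
    W2 -= lr * h.T @ delta_out;  b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_h;    b1 -= lr * delta_h.sum(axis=0)

print(out.round(3))  # typically approaches [0, 1, 1, 0]
```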


Grad-CAM: Why did you say that?

arXiv.org Machine Learning

We propose a technique for making Convolutional Neural Network (CNN)-based models more transparent by visualizing input regions that are 'important' for predictions -- or visual explanations. Our approach, called Gradient-weighted Class Activation Mapping (Grad-CAM), uses class-specific gradient information to localize important regions. These localizations are combined with existing pixel-space visualizations to create a novel high-resolution and class-discriminative visualization called Guided Grad-CAM. These methods help better understand CNN-based models, including image captioning and visual question answering (VQA) models. We evaluate our visual explanations by measuring their ability to discriminate between classes, their ability to inspire trust in humans, and their correlation with occlusion maps. Grad-CAM provides a new way to understand CNN-based models. We have released code, an online demo hosted on CloudCV, and a full version of this extended abstract.
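
For readers who want to see the mechanics, here is a minimal Grad-CAM sketch, assuming PyTorch and torchvision are available (the paper itself is framework-agnostic): it hooks the last convolutional block of a ResNet, weights each activation channel by the spatially averaged gradient of the target class score, and sums the result into a coarse class-discriminative heatmap. The choice of resnet18, the 'layer4' hook point, and the random input are illustrative assumptions.

```python
# Sketch of Grad-CAM: class-gradient-weighted activation maps.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18()   # random weights here; load trained weights in practice
model.eval()

activations, gradients = {}, {}

def save_activation(module, inp, out):
    activations["value"] = out
    out.register_hook(lambda grad: gradients.update(value=grad))

# Hook the last conv block; 'layer4' is specific to torchvision ResNets.
model.layer4.register_forward_hook(save_activation)

x = torch.randn(1, 3, 224, 224)            # stand-in for a preprocessed image
scores = model(x)                          # class scores, shape (1, 1000)
class_idx = scores.argmax(dim=1).item()
scores[0, class_idx].backward()            # gradient of the target class score

# Global-average-pool the gradients to get per-channel weights, then take a
# weighted sum of the activation maps and apply ReLU (the Grad-CAM map).
acts = activations["value"]                # (1, C, h, w)
grads = gradients["value"]                 # (1, C, h, w)
channel_weights = grads.mean(dim=(2, 3), keepdim=True)
cam = F.relu((channel_weights * acts).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)                           # (1, 1, 224, 224) heatmap
```

Guided Grad-CAM, as described in the abstract, would then multiply this coarse map elementwise with a pixel-space visualization such as guided backpropagation to recover high-resolution detail.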