
#012A Building a Deep Neural Network - Master Data Science

#artificialintelligence

In this post we will see what the building blocks of a Deep Neural Network are. We will pick one layer, for example layer \( l \) of a deep neural network, and focus on the computations for that layer. In the forward pass for layer \( l \) we take the activations of the previous layer as input and produce the activations of the current layer \( l \) as output. It is good to cache the value of \( z^{[l]} \) for the calculations in the backward pass. In the backward pass we input \( da^{[l]} \) and get \( da^{[l-1]} \) as output, as presented in the following graph.
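To make these computations concrete, here is a minimal NumPy sketch of the forward and backward pass for a single fully connected layer, assuming a sigmoid activation; the function names and the cache layout are illustrative choices, not code taken from the original post.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def layer_forward(a_prev, W, b):
    # Forward pass for layer l: input a_prev = a^[l-1], output a = a^[l].
    z = W @ a_prev + b           # z^[l]
    a = sigmoid(z)               # a^[l]
    cache = (a_prev, W, z)       # cache z^[l] (and the inputs) for the backward pass
    return a, cache

def layer_backward(da, cache):
    # Backward pass for layer l: input da = da^[l], output da_prev = da^[l-1].
    a_prev, W, z = cache
    m = a_prev.shape[1]
    dz = da * sigmoid(z) * (1 - sigmoid(z))    # dz^[l] through the sigmoid
    dW = (dz @ a_prev.T) / m                   # gradient w.r.t. W^[l]
    db = np.sum(dz, axis=1, keepdims=True) / m # gradient w.r.t. b^[l]
    da_prev = W.T @ dz                         # da^[l-1], passed to the previous layer
    return da_prev, dW, db
```

Caching \( z^{[l]} \) in the forward pass means the backward pass can reuse it directly instead of recomputing the linear step.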


CamCal 004 What does R look like? - Master Data Science

#artificialintelligence

Learn how three rigid transformations, reflection, rotation and translation, are applied in computer vision, and how to verify algebraically whether a transformation is rigid or not.
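As a rough illustration of the algebraic check, a transformation matrix \( R \) is rigid when it is orthogonal, i.e. \( R^T R = I \), so that lengths and angles are preserved; \( \det(R) = +1 \) corresponds to a rotation and \( \det(R) = -1 \) to a reflection. The NumPy sketch below is an assumption of how such a check can be written, not the exact verification from the post.

```python
import numpy as np

def is_rigid(R, tol=1e-8):
    """Return True if the square matrix R is a rigid transformation:
    R is orthogonal (R.T @ R = I), which forces det(R) = +1 (rotation)
    or det(R) = -1 (reflection)."""
    n = R.shape[0]
    orthogonal = np.allclose(R.T @ R, np.eye(n), atol=tol)
    return orthogonal and np.isclose(abs(np.linalg.det(R)), 1.0, atol=tol)

# A 2D rotation by 30 degrees is rigid; a scaling by 2 is not.
theta = np.deg2rad(30)
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
scaling = 2 * np.eye(2)
print(is_rigid(rotation))  # True
print(is_rigid(scaling))   # False
```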


#006A Fast Logistic Regression - Master Data Science

#artificialintelligence

When we are programming Logistic Regression or Neural Networks we should avoid explicit \( for \) loops. It's not always possible, but when we can, we should use built-in functions or find some other way to compute it. Vectorizing the implementation of Logistic Regression makes the code highly efficient. In this post we will see how we can use this technique to compute gradient descent without using even a single \( for \) loop. The original code was non-vectorized and highly inefficient, so we need to transform it.
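To illustrate what such a vectorized implementation can look like, here is a minimal NumPy sketch of one gradient descent step for logistic regression, processing all \( m \) training examples at once with no explicit \( for \) loop; the variable names, shapes and learning rate are illustrative assumptions, not the exact code from the post.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def logistic_regression_step(X, Y, w, b, learning_rate=0.01):
    """One vectorized gradient descent step for logistic regression.
    X: (n_features, m) inputs, Y: (1, m) labels, w: (n_features, 1) weights."""
    m = X.shape[1]
    # Forward pass over all m examples at once: no loop over examples.
    Z = w.T @ X + b              # (1, m)
    A = sigmoid(Z)               # predictions
    # Backward pass, also fully vectorized.
    dZ = A - Y                   # (1, m)
    dw = (X @ dZ.T) / m          # (n_features, 1)
    db = np.sum(dZ) / m
    # Gradient descent update.
    w = w - learning_rate * dw
    b = b - learning_rate * db
    return w, b
```

The single matrix multiplication \( w^T X \) replaces a loop over training examples, which is where most of the speed-up from vectorization comes from.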

