
The Most Complete Guide to PyTorch for Data Scientists - KDnuggets

#artificialintelligence

PyTorch has become one of the de facto standards for creating neural networks, and I love its interface. Yet it can be a little difficult for beginners to get a hold of. I remember picking PyTorch up only after some extensive experimentation a couple of years back. To tell you the truth, it took me a lot of time to pick it up, but am I glad that I moved from Keras to PyTorch. With its high customizability and pythonic syntax, PyTorch is just a joy to work with, and I would recommend it to anyone who wants to do some heavy lifting with deep learning.
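To give a taste of that pythonic interface, here is a minimal sketch (my own illustration, not from the article; the layer sizes and names are arbitrary) of a network as a plain Python class:

    import torch
    import torch.nn as nn

    class TinyNet(nn.Module):
        """A small fully connected network; sizes here are arbitrary."""
        def __init__(self):
            super().__init__()
            self.fc1 = nn.Linear(784, 128)   # e.g. a flattened 28x28 input
            self.fc2 = nn.Linear(128, 10)    # e.g. 10 output classes

        def forward(self, x):
            x = torch.relu(self.fc1(x))      # ordinary Python control flow works here
            return self.fc2(x)

    model = TinyNet()
    out = model(torch.randn(32, 784))        # batch of 32 random inputs
    print(out.shape)                         # torch.Size([32, 10])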


Exploring different optimization algorithms

#artificialintelligence

Machine learning is a field of study in the broad spectrum of artificial intelligence (AI) concerned with systems that make predictions from data without being explicitly programmed to do so. Machine learning algorithms are used in a wide variety of applications, such as recommendation engines, computer vision, spam filtering and much more. They perform extraordinarily well where it is difficult or infeasible to develop conventional algorithms for the task at hand. While many machine learning algorithms have been around for a long time, the ability to automatically apply complex mathematical calculations to big data -- over and over, faster and faster -- is a recent development. One of the most widely used machine learning techniques is the neural network.
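To make the idea of an optimization algorithm concrete, here is a minimal sketch (my own illustration, not from the article) of plain gradient descent on a one-parameter squared loss; the learning rate and step count are arbitrary:

    # Minimize f(w) = (w - 3)^2, whose minimum is at w = 3.
    w = 0.0           # initial parameter guess
    lr = 0.1          # learning rate (step size)
    for step in range(100):
        grad = 2 * (w - 3)   # derivative of (w - 3)^2 with respect to w
        w -= lr * grad       # update rule: w <- w - lr * gradient
    print(round(w, 4))       # approaches 3.0

Stochastic and adaptive optimizers (SGD, Adam, and so on) are variations on exactly this update loop.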


Deep Learning With Weighted Cross Entropy Loss On Imbalanced Tabular Data Using FastAI

#artificialintelligence

The dataset comes from the context of ad conversions, where the binary target values 1 and 0 correspond to conversion success and failure. This proprietary dataset (no, I don't own the rights) has some particularly interesting attributes due to its dimensions, class imbalance and the rather weak relationship between the features and the target variable. First, the dimensions of the data: this tabular dataset contains a fairly large number of records, and its categorical features have very high cardinality. Note: in FastAI, categorical features are represented using embeddings, which can improve classification performance on high-cardinality features. Second, the binary class labels are highly imbalanced, since successful ad conversions are relatively rare.
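In plain PyTorch (which FastAI builds on), a weighted cross-entropy loss for such an imbalanced binary target can be set up as below; this is a generic sketch, and the weight values are made up for illustration:

    import torch
    import torch.nn as nn

    # Class weights: upweight the rare positive class (values are illustrative;
    # a common heuristic is the inverse of each class's frequency).
    class_weights = torch.tensor([1.0, 20.0])   # [weight for class 0, weight for class 1]
    loss_fn = nn.CrossEntropyLoss(weight=class_weights)

    logits = torch.randn(8, 2)                  # model outputs for a batch of 8
    targets = torch.tensor([0, 0, 0, 0, 0, 0, 0, 1])
    print(loss_fn(logits, targets))             # misclassifying the rare 1s now costs more

FastAI's CrossEntropyLossFlat wrapper accepts the same weight argument, so the idea carries over directly to a tabular learner.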


Training our humans on the wrong dataset

#artificialintelligence

I really don't want to say that I've figured out the majority of what's wrong with modern education and how to fix it, BUT: when we train (fit) any given ML model for a specific problem on which we have a training dataset, there are several ways we can go about it, and all of them involve using that dataset. Say we're training a model that takes a 2d image of some glassware and turns it into a 3d rendering. We have images of 2,000 glasses from different angles and in different lighting conditions, each with an associated 3d model. How do we go about training the model? Well, arguably, we could start small and then feed in the whole dataset, we could use different sizes for the test/train/validation splits, we could use cross-validation to determine the overall accuracy of our method or decide it would take too long, and so on. But I'm fairly sure that nobody will ever say: I know, let's take a dataset of 2d images of cars and their 3d renderings and train the model on that first.
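For reference, the split-and-validate options mentioned above look roughly like this with scikit-learn (a generic sketch on stand-in data, not the author's glassware pipeline):

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split, cross_val_score

    X, y = make_classification(n_samples=2000, random_state=0)  # stand-in data

    # Option 1: a fixed train/test split with a chosen size.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

    # Option 2: cross-validation to estimate the overall accuracy of the method.
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    print(scores.mean())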


Implementing a Deep Learning Library from Scratch in Python - KDnuggets

#artificialintelligence

Deep learning has evolved from simple neural networks to quite complex architectures in a short span of time. To support this rapid expansion, many different deep learning platforms and libraries have been developed along the way. One of the primary goals of these libraries is to provide easy-to-use interfaces for building and training deep learning models, allowing users to focus more on the task at hand. Achieving this may require hiding the core implementation units behind several abstraction layers, which makes it difficult to understand the basic underlying principles on which deep learning libraries are based. Hence, the goal of this article is to provide insight into the building blocks of a deep learning library.
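As a taste of those building blocks, here is a minimal dense layer with manual forward and backward passes in NumPy (a sketch of the general idea, not the article's actual code):

    import numpy as np

    class Dense:
        """Fully connected layer: y = x @ W + b, with hand-written gradients."""
        def __init__(self, n_in, n_out):
            self.W = np.random.randn(n_in, n_out) * 0.01
            self.b = np.zeros(n_out)

        def forward(self, x):
            self.x = x                      # cache input for the backward pass
            return x @ self.W + self.b

        def backward(self, grad_out):
            self.dW = self.x.T @ grad_out   # gradient w.r.t. weights
            self.db = grad_out.sum(axis=0)  # gradient w.r.t. bias
            return grad_out @ self.W.T      # gradient w.r.t. input, passed upstream

    layer = Dense(4, 3)
    out = layer.forward(np.random.randn(2, 4))   # batch of 2 samples
    grad_in = layer.backward(np.ones_like(out))  # chain rule back to the input

A full library stacks such layers, adds losses and optimizers, and automates the chain of backward calls.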


A Closer Look at the Generalization Gap in Large Batch Training of Neural Networks

#artificialintelligence

Deep learning architectures such as recurrent neural networks and convolutional neural networks have seen many significant improvements and have been applied in the fields of computer vision, speech recognition, natural language processing, audio recognition and more. The most commonly used optimization method for training highly complex and non-convex DNNs is stochastic gradient descent (SGD) or some variant of it. However, the non-convex objective functions typical of DNNs are difficult to optimize, so SGD, at best, finds a local minimum of the objective. Although these solutions are only local minima, they have produced great end results.
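At its core, SGD computes the gradient on a mini-batch rather than the full dataset; the batch size is exactly the knob the generalization-gap work studies. A minimal sketch of one step (my own illustration; the data, batch size and learning rate are made up):

    import torch

    # One SGD step on a random mini-batch: w <- w - lr * grad(loss on batch)
    X, y = torch.randn(10000, 5), torch.randn(10000, 1)   # stand-in dataset
    w = torch.zeros(5, 1, requires_grad=True)
    lr, batch_size = 0.01, 64                             # large batches change this trade-off

    idx = torch.randint(0, len(X), (batch_size,))         # sample a mini-batch
    loss = ((X[idx] @ w - y[idx]) ** 2).mean()            # loss on the batch only
    loss.backward()                                       # noisy gradient estimate
    with torch.no_grad():
        w -= lr * w.grad                                  # gradient step
        w.grad.zero_()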


The Mathematics Behind Deep Learning

#artificialintelligence

Deep neural networks (DNNs) are essentially formed by connecting multiple perceptrons, where a perceptron is a single neuron. Think of an artificial neural network (ANN) as a system containing a set of inputs that are fed along weighted paths. These inputs are then processed, and an output is produced to perform some task. Over time, the ANN 'learns', and different paths are developed. Paths can have different weightings, and paths that are found to be more important (or produce more desirable results) are assigned higher weightings within the model than those which produce less desirable results.
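A single perceptron of this kind, in code (an illustrative sketch with made-up weights): it takes the weighted sum of its inputs and fires if the sum crosses a threshold.

    import numpy as np

    def perceptron(x, w, b):
        """Weighted sum of inputs followed by a step activation."""
        return 1 if np.dot(w, x) + b > 0 else 0

    x = np.array([1.0, 0.5])        # inputs
    w = np.array([0.7, -0.3])       # path weights (higher = more important)
    print(perceptron(x, w, b=0.1))  # prints 1, since 0.7*1.0 - 0.3*0.5 + 0.1 > 0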


Cost Functions In Machine Learning - The Click Reader

#artificialintelligence

The goal of a regression problem in machine learning is to find a function that can accurately predict the data pattern. Similarly, a classification problem involves finding a function that can accurately separate the different classes of data. The accuracy of the model is determined by how well it predicts the output values given the input values. Here, we will be discussing one such metric used in iteratively calibrating the accuracy of the model, known as the cost function. Before answering the question of how the model learns, it is important to know what the model actually learns.
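A concrete example is mean squared error, the standard cost function for regression (a sketch with toy numbers of my own, not from the article):

    import numpy as np

    def mse(y_true, y_pred):
        """Mean squared error: average squared gap between prediction and truth."""
        return np.mean((y_true - y_pred) ** 2)

    y_true = np.array([3.0, 5.0, 7.0])
    y_pred = np.array([2.5, 5.0, 8.0])
    print(mse(y_true, y_pred))   # (0.25 + 0 + 1) / 3 = 0.4166...

Training then amounts to adjusting the model's parameters so that this number shrinks.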


Understand How Neural Networks Work

#artificialintelligence

Neural networks and various other models of how the brain works have been around since people started talking about artificial intelligence. This article introduces you to the concept of neural networks and how to implement them using Python. The figure above shows the architecture of a two-layer neural network. Note the three layers in this "two-layer" neural network: the input layer is generally excluded when you count the layers of a neural network. Looking at this diagram, you can see that the neurons in each layer are connected to all the neurons in the next layer.
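A two-layer network like the one described reduces to two matrix multiplications in NumPy (a sketch; the layer sizes are arbitrary and the weights are random rather than trained):

    import numpy as np

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    # Every neuron in a layer connects to every neuron in the next layer,
    # so each layer is one matrix multiplication.
    x  = np.random.randn(3)          # input layer (not counted as a layer)
    W1 = np.random.randn(3, 4)       # input -> hidden (layer 1)
    W2 = np.random.randn(4, 2)       # hidden -> output (layer 2)

    hidden = sigmoid(x @ W1)
    output = sigmoid(hidden @ W2)
    print(output)                    # activations of the 2 output neurons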


Explanation of Keras for Deep Learning in Real World Problem

#artificialintelligence

Keras is a high-level deep learning neural network library written in Python. It runs on top of backend libraries like TensorFlow (or Theano, CNTK, etc.) that handle the low-level computation, like multiplying tensors, convolutions and other operations. The library has many pros: it is very easy to use once you get familiar with it, and it allows you to build a neural network model in a few lines of code. It is well supported by the community, it can run on top of many backend libraries as mentioned earlier, it can be executed on more than one GPU, and so on. In this example, we are going to install TensorFlow, as it is the most popular and widely used backend.
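A small model really does fit in a few lines of Keras with the TensorFlow backend (a generic sketch, not tied to the article's specific problem; the layer sizes are arbitrary):

    import tensorflow as tf
    from tensorflow.keras import layers

    # A minimal fully connected classifier.
    model = tf.keras.Sequential([
        layers.Dense(64, activation="relu", input_shape=(20,)),  # 20 input features
        layers.Dense(10, activation="softmax"),                  # 10 classes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()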