Results


Accelerating Deep Learning Training with BigDL and Drizzle on Apache Spark - RISE Lab

#artificialintelligence

This work was done in collaboration with Ding Ding and Sergey Ermolin from Intel. In recent years, the scale of datasets and models used in deep learning has increased dramatically. Although larger datasets and models can improve accuracy in many AI applications, they often take much longer to train on a single machine. However, distributing training across large clusters is still far less common with today's popular deep learning frameworks than it has long been in the Big Data world, because large GPU clusters are often hard to come by and popular DL frameworks lack convenient facilities for distributed training. By leveraging the cluster distribution capabilities in Apache Spark, BigDL successfully performs very large-scale distributed training and inference.


Which algorithm takes the crown: Light GBM vs XGBOOST?

@machinelearnbot

If you are an active member of the Machine Learning community, you must be aware of boosting machines and their capabilities. The development of boosting machines runs from AdaBoost to today's favourite, XGBoost. XGBoost has become a de facto algorithm for winning competitions on Analytics Vidhya and Kaggle, simply because it is extremely powerful. But given lots and lots of data, even XGBoost takes a long time to train. Many of you might not be familiar with Light Gradient Boosting (LightGBM), but you will be after reading this article.
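Both libraries expose scikit-learn-style estimators, so trying them side by side takes only a few lines. The sketch below uses synthetic data and near-default settings purely for illustration; it is not the article's benchmark:

```python
# Illustrative side-by-side run of XGBoost and LightGBM on synthetic data.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [("XGBoost", XGBClassifier(n_estimators=200)),
                    ("LightGBM", LGBMClassifier(n_estimators=200))]:
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))
```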


Making your First Machine Learning Classifier in Scikit-learn (Python) - Codementor

#artificialintelligence

One of the most amazing things about Python's scikit-learn library is that it has a 4-step modeling pattern that makes it easy to code a machine learning classifier. While this tutorial uses a classifier called Logistic Regression, the coding process applies to other classifiers in sklearn (Decision Tree, K-Nearest Neighbors, etc.). In this tutorial, we use Logistic Regression to predict digit labels based on images. The training digits (observations) come from the MNIST dataset, and their category membership is known (labels 0–9). After training a model with logistic regression, it can be used to predict the label (0–9) of a given image.
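For reference, a minimal sketch of that 4-step pattern (import, instantiate, fit, predict) is shown below, using scikit-learn's built-in 8x8 digits dataset as a small stand-in for MNIST; the split and solver settings are illustrative rather than the tutorial's exact code:

```python
# Step 1: import the classifier and a labeled digits dataset
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                          # 8x8 digit images with labels 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000)         # Step 2: instantiate the model
clf.fit(X_train, y_train)                       # Step 3: train on labeled images
predictions = clf.predict(X_test)               # Step 4: predict labels for unseen images
print("test accuracy:", clf.score(X_test, y_test))
```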


8 Proven Ways for boosting the "Accuracy" of a Machine Learning Model

@machinelearnbot

Enhancing a model's performance can be challenging at times. I'm sure a lot of you would agree with me if you've found yourself stuck in a similar situation. You try all the strategies and algorithms that you've learnt, yet you fail to improve the accuracy of your model. You feel helpless and stuck.


How to Train your Self-Driving Car to Steer – Towards Data Science – Medium

#artificialintelligence

Neural networks, and deep learning research in particular, have recently achieved many breakthroughs in computer vision and other important fields of computer science. Deep neural networks, especially in computer vision and object recognition, often have a huge number of parameters, millions of them. SqueezeNet is a fairly recent model that achieves remarkable performance on object recognition tasks with very few parameters, weighing in at just a few megabytes. I added a recurrent layer to the output of one of the first densely connected layers of SqueezeNet: the network now takes 5 consecutive frames as input, and the recurrent layer outputs a single real-valued number, the steering angle.
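As a rough sketch of that architecture (not the author's code), the Keras model below wraps a small stand-in convnet, rather than SqueezeNet, in a TimeDistributed layer over 5 frames, feeds the per-frame encodings to an LSTM, and regresses a single steering angle; the frame size and layer widths are assumptions:

```python
from tensorflow.keras import layers, models

# Per-frame encoder: a small convnet standing in for SqueezeNet.
frame_encoder = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),        # dense layer whose output feeds the RNN
])

# Sequence model: 5 consecutive frames in, one steering angle out.
model = models.Sequential([
    layers.TimeDistributed(frame_encoder, input_shape=(5, 128, 128, 3)),
    layers.LSTM(32),                            # recurrent layer over the frame encodings
    layers.Dense(1),                            # single real-valued steering angle
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```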


Automated Machine Learning: Deploying AutoML to the Cloud

#artificialintelligence

In this post, I share an AutoML setup to train and deploy pipelines in the cloud using Python, Flask, and two AutoML frameworks that automate feature engineering and model building. I tested and combined two open source Python tools: tsfresh, an automated feature engineering tool, and TPOT, an automated feature preprocessing and model optimization tool. After an optimal feature engineering and model building pipeline is determined, the pipeline is persisted within the Flask application in a Python dictionary, with the dictionary key being the pipeline id specified in the parameter file. I show how to make use of open source AutoML tools and operationalize a scalable automated feature engineering and model building pipeline in the cloud.
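A minimal sketch of how the two tools chain together is shown below; the toy series, parameter choices, and exported file name are assumptions rather than the post's actual pipeline:

```python
import numpy as np
import pandas as pd
from tsfresh import extract_features
from tpot import TPOTClassifier

# Toy raw data: 20 short time series, one row per (series id, timestamp).
rng = np.random.default_rng(0)
frames, labels = [], {}
for sid in range(20):
    label = sid % 2
    frames.append(pd.DataFrame({"id": sid,
                                "time": range(10),
                                "value": rng.normal(loc=label, size=10)}))
    labels[sid] = label
ts = pd.concat(frames, ignore_index=True)
y = pd.Series(labels)

# tsfresh: automated feature engineering from the raw series.
X = extract_features(ts, column_id="id", column_sort="time").fillna(0.0)

# TPOT: automated preprocessing and model search over the extracted features.
tpot = TPOTClassifier(generations=5, population_size=20, random_state=0)
tpot.fit(X, y.loc[X.index])
tpot.export("optimal_pipeline.py")   # the pipeline the Flask app would persist and key by id
```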


Alison machine learning predicts mobile ad campaign results

#artificialintelligence

YellowHead has launched Alison, a machine learning technology that predicts how mobile advertising campaigns, known as paid user acquisition, will turn out. The company specializes in paid user acquisition campaigns, app store optimization, and search engine optimization, and has now added Alison, which uses machine learning to predict a campaign's performance in the hopes of uncovering more insights for brands and wasting less advertising money. Math professors on the Data Science Research Team at Tel Aviv University worked with the company's developers on Alison, which supplements human intelligence to optimize campaigns based on predicted results across multiple ad platforms such as Facebook and Google.


Pseudo-labeling: a simple semi-supervised learning method - Data, what now?

@machinelearnbot

In this post, I will show how a simple semi-supervised learning method called pseudo-labeling can increase the performance of your favorite machine learning models by utilizing unlabeled data. First, train the model on labeled data; then use the trained model to predict labels on the unlabeled data, thus creating pseudo-labels. In competitions, such as those found on Kaggle, the competitor receives the training set (labeled data) and test set (unlabeled data). Pseudo-labeling allows us to utilize the unlabeled data while training machine learning models.
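A minimal sketch of that loop with scikit-learn is shown below; the model choice and the decision to keep every pseudo-label (rather than only confident ones) are assumptions, not the post's exact recipe:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
# Pretend 80% of the data is unlabeled (like a Kaggle test set).
X_lab, X_unlab, y_lab, _ = train_test_split(X, y, test_size=0.8, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_lab, y_lab)                          # 1. train on the labeled data
pseudo_labels = model.predict(X_unlab)           # 2. predict labels for the unlabeled data

X_all = np.vstack([X_lab, X_unlab])              # 3. merge real and pseudo-labeled examples
y_all = np.concatenate([y_lab, pseudo_labels])
model.fit(X_all, y_all)                          # 4. retrain on the enlarged training set
```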


How to Set Up Distributed XGBoost on MapR-FS

#artificialintelligence

In this blog post, I'll provide step-by-step instructions for setting up XGBoost on a 3-node Hadoop cluster (Ubuntu EC2 instances). We achieved good performance by running XGBoost through a Message Passing Interface (MPI) with MapR-FS, and we recommend setting up POSIX clients for XGBoost training tasks. In addition to running XGBoost on MapR cluster nodes, I recommend running XGBoost on MapR POSIX clients. Also, running XGBoost over MPI won't noticeably affect YARN resource management on the MapR cluster nodes.


DART: Dropout Regularization in Boosting Ensembles

#artificialintelligence

The idea of DART is to build an ensemble by randomly dropping boosting tree members. The dropout percentage determines the degree of regularization for the boosting tree ensemble. For comparison, we first developed a boosting tree ensemble without dropouts. By dropping 10% of the tree members, the ROC for the testing set increases from 0.60 to 0.65.
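For illustration only (not the post's code or numbers), the sketch below uses XGBoost's built-in DART booster with a 10% drop rate and compares test ROC AUC against a plain gradient-boosted ensemble on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

plain = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
dart = XGBClassifier(booster="dart", rate_drop=0.1,   # drop 10% of trees per boosting round
                     n_estimators=200, max_depth=3, learning_rate=0.1)

for name, model in [("gbtree", plain), ("dart", dart)]:
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(name, round(auc, 3))
```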