Results


Tutorial: Machine Learning

#artificialintelligence

In this tutorial, you will create your first machine learning model by analyzing historical customer records and order logs from Haiku T-Shirts. From the Dataiku DSS home page, click the Tutorials button in the left pane and select Tutorial: Machine Learning. In the flow, you see the steps used in the previous tutorials to create, prepare, and join the customers and orders datasets. The Confusion Matrix compares the actual values of the target variable with the predicted values (hence values such as false positives, false negatives…) and some associated metrics: precision, recall, and F1-score.
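The metrics the Confusion Matrix reports can be reproduced by hand; here is a minimal scikit-learn sketch on toy labels (not the tutorial's actual data):

```python
from sklearn.metrics import confusion_matrix, precision_score, recall_score

# Toy actual vs. predicted labels for a binary target (1 = positive class)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# For a binary problem, ravel() unpacks the 2x2 matrix into its four cells
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

precision = tp / (tp + fp)   # how many predicted positives were real
recall = tp / (tp + fn)      # how many real positives were found
f1 = 2 * precision * recall / (precision + recall)

# Sanity check against scikit-learn's own implementations
assert precision == precision_score(y_true, y_pred)
assert recall == recall_score(y_true, y_pred)
```

With these toy labels there are 3 true positives, 1 false positive, and 1 false negative, so all three metrics come out to 0.75.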


Automated Machine Learning: Deploying AutoML to the Cloud

#artificialintelligence

In this post, I share an AutoML setup for training and deploying pipelines in the cloud using Python, Flask, and two AutoML frameworks that automate feature engineering and model building. I tested and combined two open-source Python tools: tsfresh, an automated feature engineering tool, and TPOT, an automated feature preprocessing and model optimization tool. After an optimal feature engineering and model building pipeline is determined, the pipeline is persisted within our Flask application in a Python dictionary, with the dictionary key being the pipeline ID specified in the parameter file. I have shown how to use open-source AutoML tools and operationalize a scalable, automated feature engineering and model building pipeline in the cloud.
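The in-memory persistence scheme described above can be sketched in plain Python. In this sketch a scikit-learn pipeline stands in for the one TPOT would export after its search, and the pipeline ID is a made-up example, not one from the post:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for the fitted pipeline an AutoML search would produce
X, y = make_classification(n_samples=100, n_features=5, random_state=0)
fitted = Pipeline([("scale", StandardScaler()),
                   ("model", LogisticRegression())]).fit(X, y)

# Persist fitted pipelines in-process, keyed by pipeline ID;
# in a Flask app this dict lives for the lifetime of the server
PIPELINES = {}

def register(pipeline_id, pipeline):
    PIPELINES[pipeline_id] = pipeline

def predict(pipeline_id, rows):
    # Look up the pipeline by ID and run the rows through it
    return PIPELINES[pipeline_id].predict(rows)

register("example-pipeline-v1", fitted)
preds = predict("example-pipeline-v1", X[:3])
```

A Flask route would simply wrap `predict`, pulling the pipeline ID and rows from the request body.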


How Do Random Forests Improve Simple Regression Trees?

@machinelearnbot

The Regression Tree will simply split the height-weight space and assign a number of points to each partition. We simply estimate the desired Regression Tree on many bootstrap samples (re-sampling the data many times with replacement and re-estimating the model) and take the final prediction as the average of the predictions across the trees. I broke the CPUs data into a training sample (the first 150 observations) and a test sample (the remaining observations) and estimated a Regression Tree and a Random Forest. If you liked this post, you can find more details on Regression Trees and Random Forests in the book The Elements of Statistical Learning, which can be downloaded directly from the authors' page here.
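The bootstrap-and-average procedure is straightforward to sketch directly; this illustration uses synthetic one-feature data rather than the CPUs dataset from the post:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Toy regression data: one feature, noisy smooth target
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, size=200)

def bagged_trees(X, y, n_trees=50):
    """Fit one fully grown regression tree per bootstrap sample."""
    trees = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(X), size=len(X))  # resample with replacement
        trees.append(DecisionTreeRegressor().fit(X[idx], y[idx]))
    return trees

trees = bagged_trees(X, y)
X_test = np.array([[2.5], [7.5]])
# Final prediction: the average of the predictions across the trees
pred = np.mean([t.predict(X_test) for t in trees], axis=0)
```

A single tree's prediction is a jagged step function; averaging many bootstrap trees smooths it out and reduces variance, which is the core improvement Random Forests add (plus random feature subsetting at each split).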


Using Recurrent Neural Networks to Predict Player Performance

#artificialintelligence

For data prone to noise and anomalies (most data, if we're being honest), a Long Short-Term Memory network (LSTM) preserves the long-term memory capabilities of the RNN while filtering out irrelevant data points that are not part of the pattern. Mechanically speaking, the LSTM adds an extra operation to the nodes in the network, the outcome of which determines whether a data point will be remembered as part of a potential pattern and used to update the weight matrix, or forgotten and cast aside as noise. For example, to train the HR network, the first input to the network is the number of homers the player hit in his first game, the second input is the number he hit in his second game, and so on. With a network to train and data to train it with, we can now look at a test case in which the network attempted to learn Manny Machado's performance patterns and then made some predictions.
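The game-by-game input scheme can be sketched in a framework-agnostic way. Here a NumPy helper slices a hypothetical home-run series (invented numbers, not Machado's actual stats) into the (samples, timesteps, features) windows an LSTM layer expects:

```python
import numpy as np

# Hypothetical game-by-game home-run counts for one player
hr_per_game = np.array([0, 1, 0, 0, 2, 1, 0, 0, 1, 0])

def make_sequences(series, window=4):
    """Slide a window over the series: each sample is `window`
    consecutive games, and the target is the following game."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    # LSTM layers expect input shaped (samples, timesteps, features)
    return np.array(X)[..., np.newaxis], np.array(y)

X, y = make_sequences(hr_per_game)
# X.shape == (6, 4, 1): six training samples of four games each,
# one feature (home runs) per game
```

These `(X, y)` pairs can then be fed to any recurrent model; the network sees game 1, game 2, ... in order within each window, matching the training scheme described above.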


HR Analytics: Using Machine Learning to Predict Employee Turnover

#artificialintelligence

Then we'll use the new lime package, which enables complex, black-box machine learning models to be broken down into variable importance plots. We'll take a look at two cutting-edge techniques. Machine learning with h2o.automl() from the h2o package: this function takes automated machine learning to the next level by testing a number of advanced algorithms, such as random forests, ensemble methods, and deep learning, along with more traditional algorithms such as logistic regression. Feature importance with the lime package: the problem with advanced machine learning algorithms such as deep learning is that they are nearly impossible to understand because of their complexity. We can see a common theme with Case 3 and Case 7: Training Time, Job Role, and Over Time are among the top factors influencing attrition.
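lime's core idea, fitting a simple local surrogate around one prediction of a black-box model, can be sketched without the package itself. This is an illustration of the technique on synthetic data, not the lime API or the HR dataset:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=4, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def local_importance(instance, n_samples=500, scale=0.5):
    """Perturb the instance, query the black box, and fit a
    distance-weighted linear surrogate; its coefficients act
    as local variable importances for this one prediction."""
    perturbed = instance + rng.normal(0, scale, size=(n_samples, len(instance)))
    probs = black_box.predict_proba(perturbed)[:, 1]
    dists = np.linalg.norm(perturbed - instance, axis=1)
    weights = np.exp(-dists ** 2)  # nearby samples count more
    surrogate = Ridge().fit(perturbed, probs, sample_weight=weights)
    return surrogate.coef_

coefs = local_importance(X[0])  # one importance per feature
```

The sign and magnitude of each coefficient say which features push this particular prediction up or down, which is exactly what lime's per-case plots (e.g. for Case 3 and Case 7) visualize.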


Dealing with unbalanced data in machine learning

#artificialintelligence

Here is the same model I used in my webinar example: I randomly divide the data into training and test sets (stratified by class) and perform Random Forest modeling with 10 x 10 repeated cross-validation. You implement them the same way as before, this time choosing the "rose" sampling option… The SMOTE paper also shows that "a combination of our method of over-sampling the minority class and under-sampling the majority class can achieve better classifier performance (in ROC space) than varying the loss ratios in Ripper or class priors in Naive Bayes." Sensitivity (or recall) describes the proportion of benign cases that have been predicted correctly, while specificity describes the proportion of malignant cases that have been predicted correctly. Here, all four methods improved specificity and precision compared to the original model.
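A minimal Python stand-in for the sampling step, using simple random oversampling rather than ROSE's synthetic sampling, together with the sensitivity/specificity calculation:

```python
import numpy as np

rng = np.random.default_rng(0)

def oversample(X, y):
    """Naive random oversampling: resample the minority class with
    replacement until the classes are balanced (a simple stand-in
    for ROSE/SMOTE-style methods, which generate synthetic points)."""
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    idx = np.where(y == minority)[0]
    extra = rng.choice(idx, size=counts.max() - counts.min(), replace=True)
    keep = np.concatenate([np.arange(len(y)), extra])
    return X[keep], y[keep]

X = rng.normal(size=(100, 3))
y = np.array([0] * 90 + [1] * 10)  # 90/10 class imbalance
X_bal, y_bal = oversample(X, y)    # now 90/90

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = recall on the positive class;
    specificity = recall on the negative class."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)
```

Comparing sensitivity and specificity before and after resampling is what reveals whether a balancing method actually helps, since overall accuracy alone hides the minority class.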


MarTech Landscape: What is predictive advertising?

#artificialintelligence

With predictive analytics powering ad tech, campaigns can target audience segments based on a huge number of behavioral signals, ads can be personalized to be more relevant in the context of the user, and bids can be optimized based on user data, all faster and with higher success rates than humans can achieve manually. There are several ways in which predictive advertising is being applied to traditional digital advertising tactics, including campaign optimization, media mix modeling, media buying, and ad serving. Criteo Predictive Search employs machine learning to automate Google Shopping campaigns, including retargeting to "re-engage high-value users via behavioral targeting technology that programmatically sets bids based on each user's propensity to make a purchase." Having immediate access to models and behavioral data enables the system to identify relevant audiences and make real-time bidding decisions based on a user's predicted interest in a particular product or service.


Using a Customized Cost Function to Deal With Unbalanced Data - DZone AI

#artificialintelligence

So we'll apply it to build a model that depends on a cost function and check whether it performs better than the models built from raw (or automatically balanced) data. A batch prediction receives a model ID and a test dataset ID and runs all the instances of the test dataset through the model. Sometimes your predictions will be right when they predict the positive class (TP, true positives) and sometimes otherwise (TN, true negatives). There are two possibilities for the predictions to be wrong: instances that are predicted to be of the positive class but are not (FP, false positives), and instances of the positive class whose prediction fails (FN, false negatives).
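The four outcomes above are exactly what a customized cost function weighs; here is a minimal sketch with assumed (hypothetical) error costs, not the specific costs from the article:

```python
def prediction_cost(y_true, y_pred, fp_cost=1.0, fn_cost=5.0):
    """Total cost of a batch of predictions: correct predictions
    (TP, TN) cost nothing, while each kind of error carries its own
    price. Here a missed positive (FN) is assumed to be five times
    as costly as a false alarm (FP), as in fraud or medical settings."""
    cost = 0.0
    for actual, predicted in zip(y_true, y_pred):
        if predicted == 1 and actual == 0:
            cost += fp_cost   # false positive
        elif predicted == 0 and actual == 1:
            cost += fn_cost   # false negative
    return cost

# One false positive and one false negative at the default costs
total = prediction_cost([1, 0, 1, 0], [1, 1, 0, 0])  # 1.0 + 5.0 = 6.0
```

Minimizing this cost rather than raw error is what lets a model built on unbalanced data favor the expensive-to-miss minority class.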


Machine Learning using Spark and R - Dataconomy

#artificialintelligence

In Spark 1.x there was no support for accessing the Spark ML (machine learning) libraries from R, and the performance of R code on Spark was considerably worse than could be achieved using, say, Scala. With Spark 2.1, however, we now have access to much of Spark's machine learning functionality from SparkR. We're going to look at using machine learning to predict wine quality based on various characteristics of the wine. For this small example, we'll just convert our wine data frame to a distributed data frame.
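Spark itself is not needed to sketch the modeling step. Here is a local scikit-learn stand-in on synthetic wine-like features (the article itself uses SparkR and the real wine-quality data; the feature names and target relationship below are invented for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for wine characteristics (e.g. acidity,
# sugar, alcohol); quality is made mostly a function of the
# third feature plus noise, purely for demonstration
n = 200
X = rng.uniform(size=(n, 3))
quality = 3 + 4 * X[:, 2] + rng.normal(0, 0.5, size=n)

X_train, X_test, y_train, y_test = train_test_split(
    X, quality, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
score = model.score(X_test, y_test)  # R^2 on held-out wines
```

In the Spark version, the same workflow runs on a distributed data frame instead of a local one, which is what makes it scale beyond a single machine's memory.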