
Demystifying Black-Box Models with SHAP Value Analysis

@machinelearnbot

As an Applied Data Scientist at Civis, I implement the latest data science research to solve real-world problems. We recently worked with a global tool manufacturing company to reduce churn among their most loyal customers. A newly proposed tool, called SHAP (SHapley Additive exPlanation) values, allowed us to build a complex time-series XGBoost model capable of making highly accurate predictions for which customers were at risk, while still allowing for an individual-level interpretation of the factors that made each of these customers more or less likely to churn. To understand why this is important, we need to take a closer look at the concepts of model accuracy and interpretability. Until recently, we always had to choose between an accurate model that was hard to interpret, or a simple model that was easy to explain but sacrificed some accuracy.
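The abstract doesn't show how SHAP values are computed. As a concept check only (not the optimized TreeSHAP algorithm the authors would use with XGBoost), here is a brute-force Shapley value computation on a toy linear model, where the exact answer is known in closed form; the model, weights, and data are all invented for illustration.

```python
import itertools
import math
import numpy as np

# Hypothetical toy setup: a linear "churn score" over 3 features. For a
# linear model, exact Shapley values have a closed form, so we can verify
# the coalition-enumeration computation against it.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))      # background data
w = np.array([0.5, -1.2, 2.0])     # model weights

def model(X):
    return X @ w

def shapley_values(x, X_bg, f, n_features):
    """Exact Shapley values by enumerating all feature coalitions.

    Features absent from a coalition are marginalized out by replacing
    them with background-data values and averaging the model output.
    """
    phi = np.zeros(n_features)

    def value(S):
        # Expected model output when features in S are fixed to x's values.
        Xs = X_bg.copy()
        Xs[:, list(S)] = x[list(S)]
        return f(Xs).mean()

    for i in range(n_features):
        rest = [j for j in range(n_features) if j != i]
        for size in range(len(rest) + 1):
            for S in itertools.combinations(rest, size):
                # Classic Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = (math.factorial(size)
                          * math.factorial(n_features - size - 1)
                          / math.factorial(n_features))
                phi[i] += weight * (value(S + (i,)) - value(S))
    return phi

x = np.array([1.0, 0.5, -0.3])     # one "customer" to explain
phi = shapley_values(x, X, model, 3)

# Closed form for a linear model: phi_i = w_i * (x_i - mean(background_i)).
expected = w * (x - X.mean(axis=0))
assert np.allclose(phi, expected)
# Shapley values sum to f(x) - E[f(X)] -- the "Additive" in SHAP, which is
# what lets each prediction be decomposed into per-feature contributions.
assert np.isclose(phi.sum(), model(x[None, :])[0] - model(X).mean())
```

The enumeration is exponential in the number of features, which is exactly why the SHAP work matters: it provides fast approximations (and exact tree-model algorithms) that make this decomposition practical for real models.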


Mastering Machine Learning Explainability in Python

#artificialintelligence

For data scientists, a key part of interpreting machine learning models is understanding which factors impact predictions. In order to effectively use machine learning in their decision-making processes, companies need to know which factors are most important. For example, if a company wants to predict the likelihood of customer churn, it might also want to know what exactly drives a customer to leave a company. In this example, the model might indicate that customers who purchase products that rarely go on sale are much more likely to stop purchasing. Armed with this knowledge, a company can make smarter pricing decisions in the future.
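The pricing example above can be sketched in code. This is not the article's implementation; it uses scikit-learn's permutation importance on synthetic data where, by construction, a customer's share of discounted purchases drives churn, so that feature should surface as most important. The feature names and data-generating process are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical synthetic churn data: 'discount_rate' is made predictive,
# the other two features are pure noise.
rng = np.random.default_rng(42)
n = 2000
discount_rate = rng.uniform(0, 1, n)      # share of purchases made on sale
tenure_years = rng.uniform(0, 10, n)      # noise
monthly_spend = rng.normal(100, 20, n)    # noise
# Customers who rarely buy discounted products churn more often.
p_churn = 1 / (1 + np.exp(4 * (discount_rate - 0.3)))
y = rng.binomial(1, p_churn)
X = np.column_stack([discount_rate, tenure_years, monthly_spend])
feature_names = ["discount_rate", "tenure_years", "monthly_spend"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: how much does shuffling one column hurt
# held-out accuracy? Model-agnostic, so it works for any black box.
result = permutation_importance(clf, X_te, y_te, n_repeats=10,
                                random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda t: -t[1])
print(ranked)
```

Because the signal was planted in `discount_rate`, it ranks first; on real data, the ranking is what tells the company where (for example, in pricing) to act.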


Developing a business strategy by combining machine learning with sensitivity analysis (Amazon Web Services)

#artificialintelligence

Machine learning (ML) is routinely used by countless businesses to assist with decision making. In most cases, however, the predictions and business decisions made by ML systems still require the intuition of human users to make judgment calls. In this post, I show how to combine ML with sensitivity analysis to develop a data-driven business strategy. This post focuses on customer churn (that is, the defection of customers to competitors), while covering problems that often arise when using ML-based analysis. These problems include difficulties with handling incomplete and unbalanced data, deriving strategic options, and quantitatively evaluating the potential impact of those options.
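A minimal sketch of the sensitivity-analysis idea described above (not AWS's actual pipeline): take a churn model, sweep a controllable input, and compare predicted churn across scenarios to quantify a strategic option. The logistic model, feature names, and customer base here are all hypothetical stand-ins for a fitted model and real data.

```python
import numpy as np

def churn_prob(discount_rate, tenure_years):
    """Hypothetical churn model: more discounts and longer tenure
    both reduce the probability of churn."""
    z = 1.5 - 4.0 * discount_rate - 0.2 * tenure_years
    return 1 / (1 + np.exp(-z))

# Hypothetical customer base.
rng = np.random.default_rng(1)
discount = rng.uniform(0, 0.5, 1000)
tenure = rng.uniform(0, 10, 1000)

baseline = churn_prob(discount, tenure).mean()
# Strategic option under evaluation: raise every customer's discount
# rate by 10 percentage points, holding everything else fixed.
scenario = churn_prob(discount + 0.10, tenure).mean()
print(f"baseline churn {baseline:.3f} -> scenario churn {scenario:.3f}")
assert scenario < baseline  # the option quantifiably reduces predicted churn
```

The same loop over several candidate interventions yields the "quantitative evaluation of strategic options" the post describes: each scenario gets a predicted churn delta that can be weighed against its cost.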


Mastering Model Explainability in Python

#artificialintelligence

For data scientists, a key part of interpreting machine learning models is understanding which factors impact predictions. In order to effectively use machine learning in their decision-making processes, companies need to know which factors are most important. For example, if a company wants to predict the likelihood of customer churn, it might also want to know what exactly drives a customer to leave a company. In this example, the model might indicate that customers who purchase products that rarely go on sale are much more likely to stop purchasing. Armed with this knowledge, a company can make smarter pricing decisions in the future.