Churn prediction

#artificialintelligence

Customer churn, also known as customer attrition, occurs when customers stop doing business with a company. Companies are interested in identifying segments of these customers because the cost of acquiring a new customer is usually higher than that of retaining an existing one. For example, if Netflix knew which segment of its customers was at risk of churning, it could proactively engage them with special offers instead of simply losing them. In this post, we will create a simple customer churn prediction model using the Telco Customer Churn dataset. We chose a decision tree to model churned customers, pandas for data crunching, and matplotlib for visualizations.
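As a rough sketch of that pipeline, the code below trains a shallow decision tree on the Telco Customer Churn data. The CSV file name, the `customerID` and `Churn` columns, and the `TotalCharges` cleanup step are assumptions based on the public Kaggle version of the dataset, not details taken from the post:

```python
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.metrics import classification_report

# Load the Telco Customer Churn data (file name assumed from the Kaggle dataset).
df = pd.read_csv("WA_Fn-UseC_-Telco-Customer-Churn.csv")

# Drop the identifier column and encode the target as 0/1.
df = df.drop(columns=["customerID"])
df["Churn"] = (df["Churn"] == "Yes").astype(int)

# In this dataset TotalCharges is read as a string; coerce it to numeric.
df["TotalCharges"] = pd.to_numeric(df["TotalCharges"], errors="coerce").fillna(0)

# One-hot encode the remaining categorical columns.
X = pd.get_dummies(df.drop(columns=["Churn"]))
y = df["Churn"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# A shallow tree keeps the model easy to inspect and visualize.
tree = DecisionTreeClassifier(max_depth=4, random_state=42)
tree.fit(X_train, y_train)

print(classification_report(y_test, tree.predict(X_test)))

# Visualize the top of the fitted tree with matplotlib.
plot_tree(tree, max_depth=2, feature_names=list(X.columns), filled=True)
plt.show()
```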


The Hitchhiker's Guide to Feature Extraction

#artificialintelligence

Good features are the backbone of any machine learning model, and good feature creation often needs domain knowledge, creativity, and lots of time. TL;DR: this post is about useful feature engineering methods and tricks that I have learned and end up using often, along with some other ideas for thinking about feature creation. Have you read about featuretools yet? If not, then you are going to be delighted.
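As a taste of what featuretools offers, here is a minimal sketch of deep feature synthesis on toy data. It assumes the featuretools 1.x API (`add_dataframe`, `target_dataframe_name`), and the tables and columns are made up for illustration:

```python
import pandas as pd
import featuretools as ft

# Toy data: customers and their transactions (invented for illustration).
customers = pd.DataFrame({"customer_id": [1, 2, 3]})
transactions = pd.DataFrame({
    "transaction_id": range(6),
    "customer_id": [1, 1, 2, 2, 3, 3],
    "amount": [20.0, 35.5, 10.0, 5.25, 99.0, 1.5],
})

# Build an EntitySet and declare the one-to-many relationship.
es = ft.EntitySet(id="retail")
es.add_dataframe(dataframe_name="customers", dataframe=customers,
                 index="customer_id")
es.add_dataframe(dataframe_name="transactions", dataframe=transactions,
                 index="transaction_id")
es.add_relationship("customers", "customer_id",
                    "transactions", "customer_id")

# Deep feature synthesis automatically generates aggregate features
# (mean, sum, count, ...) over each customer's transactions.
feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_dataframe_name="customers")
print(feature_matrix.head())
```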


Encoding Variables: Translating Your Data so the Computer Understands It

#artificialintelligence

Humans and computers don't understand data in the same way, and an active area of research in AI is determining how AI "thinks" about data. For example, the recent Quanta article Where We See Shapes, AI Sees Textures discusses an inherent disconnect between how humans and computer vision AI interpret images. The article addresses the implicit assumption many people have that when AI works with an image, it interprets the contents of the image the same way people do: by identifying the shapes of the objects. However, because most AI interprets images at a pixel level, it is more intuitive for the AI to label images by texture (i.e., more pixels in an image represent an object's texture than an object's outline or border) than by shape. Another useful example of this is in language.
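To make the idea of encoding concrete, here is a small illustrative sketch (the column and values are made up, not taken from the article) of two common ways categorical text is translated into numbers a model can consume:

```python
import pandas as pd

# A categorical column a model cannot use directly (made-up example).
df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

# One-hot encoding: one binary column per category.
one_hot = pd.get_dummies(df["color"], prefix="color")

# Label (ordinal) encoding: map each category to an integer code.
label_encoded = df["color"].astype("category").cat.codes

print(one_hot)
print(label_encoded)
```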


Opening Black Boxes: How to leverage Explainable Machine Learning

#artificialintelligence

Let's see what the result would be if we were to calculate the Shapley values for a single row. [Plot: Shapley values for a single data point.] This plot shows a base value that is used to indicate the direction of the prediction. Seeing as most of the targets are 0, it isn't strange that the base value is negative.
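A minimal sketch of how such a single-row explanation can be produced with the shap library; the model, synthetic data, and row index are placeholders rather than the ones used in the post, and return shapes can differ slightly across shap versions:

```python
import numpy as np
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic, imbalanced binary data standing in for the post's dataset.
X, y = make_classification(n_samples=1000, n_features=8, weights=[0.9, 0.1],
                           random_state=0)
X = pd.DataFrame(X, columns=[f"f{i}" for i in range(8)])

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values for tree-based models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# The base value is the average model output (here in log-odds), so with
# mostly-0 targets it comes out negative; each feature pushes a prediction
# away from it.
base_value = float(np.ravel(explainer.expected_value)[0])
print("base value:", base_value)

# Force plot for a single row (row 0), rendered with matplotlib.
shap.force_plot(base_value, shap_values[0, :], X.iloc[0, :], matplotlib=True)
```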