Interpretability: Cracking open the black box – Part II

#artificialintelligence

In the last post in the series, we defined what interpretability is and looked at a few interpretable models along with their quirks and 'gotchas'. Now let's dig deeper into post-hoc interpretation techniques, which are useful when the model itself is not transparent. This matches most real-world use cases because, whether we like it or not, black box models usually give better performance. For this exercise, I have chosen the Adult dataset, a.k.a. the Census Income dataset. Census Income is a fairly popular dataset containing demographic information such as age and occupation, along with a column that tells us whether a given person's income exceeds 50k. We use this column as the target for binary classification with a Random Forest.
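The setup described above can be sketched as follows. This is a minimal, self-contained sketch that uses synthetic stand-in data in the shape of the Census Income features (age, encoded occupation, education level); in practice you would load the real Adult dataset instead.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Synthetic stand-ins for demographic columns (assumption: real data is
# loaded and label-encoded before this point).
X = np.column_stack([
    rng.integers(18, 90, n),   # age
    rng.integers(0, 15, n),    # occupation (label-encoded)
    rng.integers(1, 16, n),    # education level
])
# Binary target: 1 if income > 50k (here a noisy function of the features).
y = ((X[:, 0] * 0.02 + X[:, 2] * 0.1 + rng.normal(0, 0.5, n)) > 2.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

The fitted `clf` is the black box that the post-hoc interpretation techniques discussed later are then applied to.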


The Ultimate Scikit-Learn Machine Learning Cheatsheet - KDnuggets

#artificialintelligence

All images were created by the author unless explicitly stated otherwise. Train-test split is an important part of measuring how well a model performs: the model is trained on designated training data and evaluated on held-out test data, which gives an estimate of its ability to generalize to new data. In sklearn, lists, pandas DataFrames, and NumPy arrays are all accepted for the X and y parameters. Training a standard supervised learning model takes the form of an import, the creation of an instance, and the fitting of the model.
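The import-instantiate-fit pattern above looks like this in practice; the choice of logistic regression on the built-in iris dataset is just an illustrative example.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
# Hold out 25% of the data to measure generalization.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=200)  # create an instance
model.fit(X_train, y_train)               # fit on the training split
print(model.score(X_test, y_test))        # accuracy on unseen data
```

The same three steps apply to any sklearn estimator; only the import and the constructor arguments change.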



An overview of model explainability in modern machine learning

#artificialintelligence

Model explainability is one of the most important problems in machine learning today. It's often the case that certain "black box" models such as deep neural networks are deployed to production and are running critical systems from everything in your workplace security cameras to your smartphone. It's a scary thought that not even the developers of these algorithms understand why exactly the algorithms make the decisions they do -- or even worse, how to prevent an adversary from exploiting them. While there are many challenges facing the designer of a "black box" algorithm, it's not completely hopeless. There are actually many different ways to illuminate the decisions a model makes.
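One such way to illuminate a model's decisions is permutation importance, which measures how much a model's score drops when a feature's values are shuffled. This sketch uses sklearn's built-in breast cancer dataset purely as a stand-in black box; it is not tied to any particular system mentioned above.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit an opaque model, then probe it post hoc.
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and record the score drop;
# larger drops indicate features the model relies on more.
result = permutation_importance(
    clf, X_test, y_test, n_repeats=5, random_state=0
)
top = result.importances_mean.argsort()[::-1][:3]
print("most influential feature indices:", top)
```

Because the technique only needs predictions and a score, it works on any fitted model, which is exactly what makes it useful for black boxes.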

