
Step-by-Step Guide to Build Interpretable Machine Learning Model -Python

#artificialintelligence

Can you interpret a deep neural network? Building a complex, dense machine learning model has the potential of reaching our desired accuracy, but does it make sense? Can you open up the black-box model and explain how it arrived at the final result? These are critical questions we need to answer as data scientists. A wide variety of businesses rely on machine learning to drive their strategy and improve their bottom line, so building a model that we can explain to our clients and stakeholders is key.
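As an illustration of the contrast the blurb draws, here is a minimal sketch of an inherently interpretable model: a one-feature linear regression whose learned coefficients are themselves the explanation. The data and column meanings are made up for illustration.

```python
# Minimal sketch: a linear model is interpretable because each learned
# coefficient states how much one feature moves the prediction.

def fit_simple_linear(xs, ys):
    """Ordinary least squares for y = slope * x + intercept on one feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical numbers: advertising spend (k$) vs. sales (k units).
spend = [1.0, 2.0, 3.0, 4.0, 5.0]
sales = [2.1, 3.9, 6.2, 8.0, 9.9]

slope, intercept = fit_simple_linear(spend, sales)
# The explanation for a stakeholder is the model itself:
# "each extra k$ of spend adds about `slope` k units of sales."
print(f"sales ≈ {slope:.2f} * spend + {intercept:.2f}")
```

A deep network fit to the same data would predict just as well, but offers no single number you can hand a stakeholder in this way.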



Decoding the Black Box: An Important Introduction to Interpretable Machine Learning Models in…

#artificialintelligence



Python Libraries for Interpretable Machine Learning

#artificialintelligence

As concerns regarding bias in artificial intelligence become more prominent, it is increasingly important for businesses to be able to explain both the predictions their models produce and how the models themselves work. Fortunately, a growing number of Python libraries attempt to solve this problem. In the following post, I am going to give a brief guide to four of the most established packages for interpreting and explaining machine learning models. They are all pip-installable, come with good documentation, and have an emphasis on visual interpretation. One of them is essentially an extension of the scikit-learn library and provides some really useful and pretty-looking visualisations for machine learning models.
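The blurb does not name the four packages, but a core model-agnostic idea that several interpretation libraries build on is permutation importance: shuffle one feature column, re-score the model, and treat the drop in accuracy as that feature's importance. The toy "model" and data below are hypothetical stand-ins, not taken from any particular library.

```python
import random

def accuracy(model, rows, labels):
    """Fraction of rows the model classifies correctly."""
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(model, rows, labels, feature_idx, rng):
    """Accuracy drop when one feature column is shuffled."""
    base = accuracy(model, rows, labels)
    shuffled = [r[feature_idx] for r in rows]
    rng.shuffle(shuffled)
    permuted = [r[:feature_idx] + (v,) + r[feature_idx + 1:]
                for r, v in zip(rows, shuffled)]
    return base - accuracy(model, permuted, labels)

# Toy classifier that only looks at feature 0, so feature 1 should
# come out with zero importance.
model = lambda row: int(row[0] > 0.5)

rows = [(0.1, 0.9), (0.9, 0.1), (0.2, 0.8), (0.8, 0.2)]
labels = [0, 1, 0, 1]

rng = random.Random(0)
for i in range(2):
    drop = permutation_importance(model, rows, labels, i, rng)
    print(f"feature {i}: accuracy drop {drop:.2f}")
```

Because the technique only needs predictions, not model internals, it works identically on a deep network or a gradient-boosted ensemble, which is why the packages covered in the post can support many model types.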


Interpretable Machine Learning

#artificialintelligence

Machine learning doesn't have to be a black box anymore. What use is a good model if we cannot explain its results to others? Interpretability is as important as creating the model itself. In his book 'Interpretable Machine Learning', Christoph Molnar beautifully encapsulates the essence of ML interpretability through this example: imagine you are a data scientist and, in your free time, you try to predict where your friends will go on vacation in the summer based on the Facebook and Twitter data you have. Now, if the predictions turn out to be accurate, your friends might be impressed and could consider you a magician who can see the future.