Human Interpretable Machine Learning (Part 1) -- The Need and Importance of Model Interpretation

#artificialintelligence

The field of Machine Learning has gone through some phenomenal changes over the last decade. Having started out as a purely academic and research-oriented domain, it has seen widespread industry adoption across diverse domains including retail, technology, healthcare, science and many more. Rather than just running lab experiments to publish research papers, the key objective of data science and machine learning in the 21st century has shifted to tackling and solving real-world problems, automating complex tasks and making our lives easier and better. More often than not, the standard toolbox of machine learning, statistical or deep learning models remains the same. New models do come into existence, like Capsule Networks, but industry adoption of them usually takes several years.


Explainable Artificial Intelligence (Part 2) -- Model Interpretation Strategies

#artificialintelligence

This article is a continuation of my series of articles on 'Explainable Artificial Intelligence (XAI)'. If you haven't checked out the first article, I would definitely recommend taking a quick glance at 'Part I -- The Importance of Human Interpretable Machine Learning', which covers the what and why of human interpretable machine learning and the need and importance of model interpretation, along with its scope and criteria. In this article, we will pick up from where we left off, expand further on the criteria for machine learning model interpretation methods and explore techniques for interpretation based on scope. The aim of this article is to give you a good understanding of existing, traditional model interpretation methods, along with their limitations and challenges. We will also cover the classic model accuracy vs. model interpretability trade-off and finally take a look at the major strategies for model interpretation. Briefly, we will be covering these aspects in this article. This should get us set and ready for the detailed hands-on guide to model interpretation coming in Part 3, so stay tuned! Model interpretation, at its heart, is about finding ways to better understand a model's decision-making policies.
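To ground what a model-agnostic, global interpretation method can look like in practice before the detailed strategies, here is a minimal sketch using permutation importance from scikit-learn. The dataset, model and parameter choices are my own illustrative assumptions, not taken from the article itself.

# A minimal, model-agnostic sketch of global interpretation: permutation
# importance measures how much shuffling each feature degrades held-out accuracy.
# Dataset, model and parameters are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature on the test set and record the mean drop in score.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda item: item[1], reverse=True)[:5]:
    print(f"{name}: {score:.3f}")

The ranked output gives a global picture of which features the model leans on most, independent of the model family, which is exactly the kind of scope-based distinction (global vs. local) the article goes on to discuss.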


Step-by-Step Guide to Build Interpretable Machine Learning Model -Python

#artificialintelligence

Can you interpret a deep neural network? Building a complex and dense machine learning model has the potential to reach our desired accuracy, but does it make sense? Can you open up the black-box model and explain how it arrived at the final result? These are critical questions we need to answer as data scientists. A wide variety of businesses are relying on machine learning to drive their strategy and improve their bottom line. Building a model that we can explain to our clients and stakeholders is key.
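One concrete way to peek inside a black-box model is to train a simple, human-readable surrogate on the black box's own predictions. The sketch below is a minimal illustration with scikit-learn; the wine dataset, the MLP "black box" and the depth-3 surrogate tree are assumptions for demonstration, not the specific approach prescribed by this guide.

# Global surrogate sketch: approximate an opaque model with a shallow,
# readable decision tree fitted to the opaque model's predictions.
# All dataset and model choices here are illustrative assumptions.
from sklearn.datasets import load_wine
from sklearn.metrics import accuracy_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_wine(return_X_y=True, as_frame=True)

# The "black box": a scaled multi-layer perceptron we cannot inspect directly.
black_box = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
).fit(X, y)

# The surrogate: a depth-3 tree trained to mimic the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box it approximates.
print("fidelity:", accuracy_score(black_box.predict(X), surrogate.predict(X)))
print(export_text(surrogate, feature_names=list(X.columns)))

The printed tree rules are something a stakeholder can read directly, and the fidelity score tells us how faithfully that simple explanation tracks the original model.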


Ideas on interpreting machine learning

#artificialintelligence

For more on advances in machine learning, prediction, and technology, check out the Data science and advanced analytics sessions at Strata Hadoop World London, May 22-25, 2017. You've probably heard by now that machine learning algorithms can use big data to predict whether a donor will give to a charity, whether an infant in a NICU will develop sepsis, whether a customer will respond to an ad, and on and on. Machine learning can even drive cars and predict elections. I believe it can, but these recent high-profile hiccups should leave everyone who works with data (big or not) and machine learning algorithms asking themselves some very hard questions: do I understand my data? Do I understand the model and answers my machine learning algorithm is giving me? And do I trust these answers? Unfortunately, the complexity that bestows the extraordinary predictive abilities on machine learning algorithms also makes the answers the algorithms produce hard to understand, and maybe even hard to ...

