SHAP: Explain Any Machine Learning Model in Python

#artificialintelligence

This article is part of a series in which we walk step by step through solving fintech problems with machine learning using the "All Lending Club loan data" dataset. In previous articles, we prepared the dataset, built a Logistic Regression model, and discussed the most common ML model evaluation metrics for a classification problem in the fintech space. This article tries to understand how our model makes its decisions and which packages can help us answer that question. Machine learning models are frequently called "black boxes": they produce highly accurate predictions.
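As a rough illustration of the kind of workflow the article describes, the sketch below fits a scikit-learn LogisticRegression on synthetic data and explains it with the shap package. The synthetic dataset, the model settings, and the specific explainer and plot calls are assumptions for the sake of a self-contained example, not the article's actual code.

import shap
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for the loan dataset.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# For linear models, LinearExplainer computes exact SHAP values cheaply.
explainer = shap.LinearExplainer(model, X_train)
shap_values = explainer.shap_values(X_test)

# Global view: which features push predictions most, and in which direction.
shap.summary_plot(shap_values, X_test)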


When -- and Why -- You Should Explain How Your AI Works

#artificialintelligence

"With the amount of data today, we know there is no way we as human beings can process it all…The only technique we know that can harvest insight from the data, is artificial intelligence," IBM CEO Arvind Krishna recently told the Wall Street Journal. The insights to which Krishna is referring are patterns in the data that can help companies make predictions, whether that's the likelihood of someone defaulting on a mortgage, the probability of developing diabetes within the next two years, or whether a job candidate is a good fit. More specifically, AI identifies mathematical patterns found in thousands of variables and the relations among those variables. These patterns can be so complex that they can defy human understanding. This can create a problem: While we understand the variables we put into the AI (mortgage applications, medical histories, resumes) and understand the outputs (approved for the loan, has diabetes, worthy of an interview), we might not understand what's going on between the inputs and the outputs.


Machine Learning Explainability for External Stakeholders

arXiv.org Artificial Intelligence

As machine learning is increasingly deployed in high-stakes contexts affecting people's livelihoods, there have been growing calls to open the black box and to make machine learning algorithms more explainable. Providing useful explanations requires careful consideration of the needs of stakeholders, including end-users, regulators, and domain experts. Despite this need, little work has been done to facilitate inter-stakeholder conversation around explainable machine learning. To help address this gap, we conducted a closed-door, day-long workshop between academics, industry experts, legal scholars, and policymakers to develop a shared language around explainability and to understand the current shortcomings of and potential solutions for deploying explainable machine learning in service of transparency goals. We also asked participants to share case studies in deploying explainable machine learning at scale. In this paper, we provide a short summary of various case studies of explainable machine learning and the lessons learned from them, and discuss open challenges.


Explainable AI: Why should business leaders care?

#artificialintelligence

Artificial intelligence (AI) has become increasingly pervasive and is experiencing widespread adoption in all industries. Faced with increasing competitive pressures and observing the AI success stories of their peers, more and more organizations are adopting AI in various facets of their business. Machine Learning (ML) models, the key component driving the AI systems, are becoming increasingly powerful, displaying superhuman capabilities on most tasks. However, this increased performance has been accompanied by an increase in model complexity, turning the AI systems into a black box whose decisions can be hard to understand by humans. Employing black box models can have severe ramifications, as the decisions made by the systems not only influence the business outcomes but can also impact many lives.

