Interpretable vs Explainable Machine Learning

#artificialintelligence

From medical diagnoses to credit underwriting, machine learning models are being used to make increasingly important decisions. To trust the systems powered by these models, we need to know how they make predictions. This is why the difference between an interpretable and an explainable model is important. How we understand our models, and the degree to which we can truly understand them, depends on whether they are interpretable or explainable. Put briefly, an interpretable model can be understood by a human without any other aids or techniques.
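The distinction above can be made concrete with a minimal sketch, assuming scikit-learn: a linear regression is interpretable because its learned coefficients can be read directly, with no separate explanation technique. The feature names here are purely illustrative.

```python
# A minimal sketch of an interpretable model: a linear regression whose
# coefficients can be inspected directly. Assumes scikit-learn is installed;
# the feature names ("income", "debt") are illustrative, not from any dataset.
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data: two features with known effects on the target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] - 1.0 * X[:, 1]

model = LinearRegression().fit(X, y)

# The model is interpretable because each coefficient states exactly how
# much the prediction changes per unit change in that feature.
for name, coef in zip(["income", "debt"], model.coef_):
    print(f"{name}: {coef:+.2f}")
```

A deep neural network fit to the same data would make the same predictions, but no such direct reading of its weights is possible; that is where post-hoc explanation techniques come in.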


Explainable AI (XAI) with Python

#artificialintelligence

- Importance of XAI in the modern world
- Differentiation of glass-box, white-box, and black-box ML models
- Categorization of XAI techniques by scope, agnosticity, data type, and explanation technique
- Trade-off between accuracy and interpretability
- Application of Microsoft's InterpretML package to generate explanations of ML models
- The need for counterfactual and contrastive explanations
- Working principles and mathematical modeling of XAI techniques such as LIME, SHAP, DiCE, LRP, and counterfactual and contrastive explanations
- Application of XAI techniques such as LIME, SHAP, DiCE, and LRP to generate explanations for black-box models on tabular, textual, and image datasets

This course provides detailed insights into the latest developments in Explainable Artificial Intelligence (XAI). Our reliance on artificial intelligence models is increasing day by day, and it is becoming equally important to explain how and why AI makes a particular decision. Recent laws have also added urgency to explaining and defending the decisions made by AI systems.
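To give a flavor of the mathematics behind SHAP mentioned above, here is a minimal sketch of the exact Shapley value computation that SHAP approximates: each feature's attribution is its average marginal contribution over all orderings of features, evaluated against a baseline. This is a brute-force illustration of the underlying idea, not the SHAP library's optimized API; the toy model and baseline are assumptions for the example.

```python
# Brute-force Shapley values for a black-box function f, relative to a
# baseline input. Illustrates the principle behind SHAP; real SHAP
# implementations use sampling or model-specific shortcuts instead.
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    n = len(x)

    def v(S):
        # Value of coalition S: features in S take the instance's values,
        # all others are held at the baseline.
        z = list(baseline)
        for i in S:
            z[i] = x[i]
        return f(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (v(S + (i,)) - v(S))
    return phi

# Additive toy model: each feature's effect is separable, so its Shapley
# value equals its own term (3.0, -2.0, 0.5*4 = 2.0, up to float error).
f = lambda z: 3.0 * z[0] - 1.0 * z[1] + 0.5 * z[2]
phi = shapley_values(f, x=[1.0, 2.0, 4.0], baseline=[0.0, 0.0, 0.0])
print(phi)
```

Note the efficiency property: the attributions sum to `f(x) - f(baseline)`, which is what makes SHAP's "additive explanations" additive.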


Explaining explainable AI

ZDNet

In 2020, one message in the artificial intelligence (AI) market came through loud and clear: AI's got some explaining to do! Explainable AI (XAI) has long been a fringe discipline in the broader world of AI and machine learning. It exists because many machine-learning models are either opaque or so convoluted that they defy human understanding. But why is it such a hot topic today? AI systems making inexplicable decisions are your governance, regulatory, and compliance colleagues' worst nightmare. But aside from this, there are other compelling reasons for shining a light into the inner workings of AI.


A Gentle Introduction to Explainable Artificial Intelligence (XAI)

#artificialintelligence

Before diving deep into the heavy explainable AI (artificial intelligence) concepts, let us look at Rohan's story and understand "WHAT IS EXPLAINABLE AI?" and "WHY IS IT NEEDED?" Rohan was a machine learning engineer at a leading company who fell very sick and had symptoms of lung cancer. He went to his doctor and discussed the issue with him. The doctor asked him to get some tests done and said, "I can only come to a conclusion after that." Rohan got his tests done and showed the reports to the doctor. The doctor was certain of the diagnosis but still wanted to know more about his condition.


Explainable Artificial Intelligence (XAI) with Python

#artificialintelligence

This course provides detailed insights into the latest developments in Explainable Artificial Intelligence (XAI). Our reliance on artificial intelligence models is increasing day by day, and it is becoming equally important to explain how and why AI makes a particular decision. Recent laws have also added urgency to explaining and defending the decisions made by AI systems. This course discusses tools and techniques using Python to visualize, explain, and build trustworthy AI systems. It covers the working principles and mathematical modeling of LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) for generating local and global explanations.
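The working principle of LIME mentioned here can be sketched in a few lines, assuming numpy and scikit-learn: sample perturbations around the instance being explained, weight them by proximity, and fit a weighted linear surrogate whose coefficients serve as local feature importances. The black-box function, kernel width, and noise scale below are illustrative choices, not the LIME library's defaults.

```python
# A minimal sketch of LIME's core idea for tabular data: fit a local
# linear surrogate to a black-box model around one instance. Real LIME
# also handles feature discretization, text, and images.
import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(predict, x, n_samples=5000, width=0.75, seed=0):
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = predict(Z)
    # Proximity kernel: nearby samples get more weight.
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / width ** 2)
    # Weighted linear surrogate; its coefficients are the explanation.
    surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=w)
    return surrogate.coef_

# Black box: nonlinear globally, roughly linear near x = (0, 1), where the
# local gradient is about (cos(0), 2*1) = (1, 2).
black_box = lambda Z: np.sin(Z[:, 0]) + Z[:, 1] ** 2
x = np.array([0.0, 1.0])
print(lime_explain(black_box, x))
```

The key design choice is the proximity kernel: it makes the surrogate faithful only in a neighborhood of `x`, which is exactly the "local" in Local Interpretable Model-agnostic Explanations.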