Bringing Transparency Into AI

#artificialintelligence

Companies are increasingly using machine learning models to make decisions, such as the allocation of jobs, loans, or university admissions, that directly or indirectly affect people's lives. Algorithms are also used to recommend a movie to watch, a person to date, or an apartment to rent. When talking to business customers, the operators of machine learning (ML) systems, I hear a growing demand to understand how these models and algorithms work, especially as the number of ML applications without a human in the loop keeps expanding. Imagine an ML model recommending the top 10 of 100 applicants for a job posting. Before trusting the model's recommendation, the recruiter wants to check the results.
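A minimal sketch of what that sanity check might look like, using synthetic data and scikit-learn's permutation importance (the data, feature names, and model below are invented for illustration, not any particular recruiter's tooling):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for 100 applicants described by a few features.
X, y = make_classification(n_samples=100, n_features=6, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Rank all applicants by predicted suitability and take the top 10.
scores = model.predict_proba(X)[:, 1]
top_10 = np.argsort(scores)[::-1][:10]
print("Top 10 applicant indices:", top_10)

# Permutation importance gives the recruiter a first sanity check:
# which inputs actually drive the ranking?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in sorted(zip(feature_names, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

If an irrelevant attribute dominates the importances, that is a signal to inspect the model before trusting its shortlist.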


A Brief History of Machine Learning Model Explainability

#artificialintelligence

If software ate the world, models will run it. But are we ready to be controlled by black-box intelligent software? We, as humans, need to understand how AI works, especially when it drives our behaviours or our businesses. That's why, in a previous post, we identified machine learning transparency as one of the hottest AI trends. Let us walk through a brief history of machine learning model explainability, illustrated by real examples from our AI Claim Management solution for insurers.


AWS Adds Explainability to SageMaker

#artificialintelligence

Amazon Web Services is adding an automated machine learning tool to SageMaker, its machine learning model builder, that improves model accuracy via explainable AI. The new SageMaker feature, dubbed Autopilot, generates a model explainability report via SageMaker Clarify, the Amazon tool used to detect algorithmic bias and increase the transparency of machine learning models. The reports help model developers understand how individual attributes of the training data contribute to a predicted result. The combination is promoted as helping to identify and limit algorithmic bias and to explain predictions, allowing users to make informed decisions based on how models arrived at their conclusions, AWS said this week. The reports also include "feature importance values" that quantify, as a percentage, how strongly each training data attribute contributed to a predicted result.
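For orientation, an explainability run via the SageMaker Python SDK's Clarify integration looks roughly like the sketch below. The role ARN, S3 paths, column names, and model name are placeholders, and parameters may differ across SDK versions; consult the AWS documentation for the current API:

```python
from sagemaker import Session, clarify

session = Session()
role = "arn:aws:iam::123456789012:role/SageMakerRole"  # placeholder role ARN

clarify_processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Where the training data lives and where the report should go (placeholders).
data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/train.csv",
    s3_output_path="s3://my-bucket/clarify-report",
    label="target",
    headers=["target", "age", "income", "credit_history"],
    dataset_type="text/csv",
)

# The deployed model Clarify will query for predictions (placeholder name).
model_config = clarify.ModelConfig(
    model_name="my-autopilot-model",
    instance_type="ml.m5.xlarge",
    instance_count=1,
    accept_type="text/csv",
)

# SHAP configuration: the report's feature importance values come from
# aggregated per-attribute SHAP contributions.
shap_config = clarify.SHAPConfig(
    baseline=[[35, 50000, 1]],
    num_samples=100,
    agg_method="mean_abs",
)

clarify_processor.run_explainability(
    data_config=data_config,
    model_config=model_config,
    explainability_config=shap_config,
)
```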


What You Get When You Get Zest Explainability

#artificialintelligence

Real explainability is essential during model development, when it's time to figure out which variables most influence the prediction and how they influence it. The chart below was generated by our ZAML Explain software and shows how an applicant's traditional credit score, in combination with other pieces of information, affects that applicant's model score (a higher score means a lower likelihood of default). Each dot in the chart represents a single credit applicant. The horizontal axis is the traditional credit score (from below 500 to over 700). The vertical axis measures the impact that the credit score has on an applicant's model score.
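A generic approximation of this kind of per-applicant plot can be built with the open-source shap library. This is not ZAML Explain itself, and the data, model, and feature names below are invented for illustration:

```python
import numpy as np
import pandas as pd
import shap
import xgboost

# Invented applicant data: traditional credit score plus other attributes.
rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "credit_score": rng.integers(450, 800, n),
    "income": rng.integers(20_000, 150_000, n),
    "debt_ratio": rng.uniform(0, 1, n),
})
# Synthetic label: 1 = no default, more likely at higher credit scores.
y = (X["credit_score"] + rng.normal(0, 60, n) > 620).astype(int)

model = xgboost.XGBClassifier(n_estimators=100).fit(X, y)

# SHAP values: per-applicant impact of each feature on the model score.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# One dot per applicant: credit score on the horizontal axis, that score's
# contribution to the applicant's model score on the vertical axis.
shap.dependence_plot("credit_score", shap_values, X)
```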


Four Approaches to Explaining AI and Machine Learning

#artificialintelligence

Advanced machine learning (ML) is a subset of AI that uses more data and sophisticated math to make better predictions and decisions. Banks and lenders could make a lot more money using ML on top of legacy credit scoring techniques to find better borrowers and reject more bad ones. But adoption of ML has been held back by the technology's "black-box" nature. ML models are exceedingly complex. You can't run a credit model safely or accurately if you can't explain its decisions.
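One common way to make an individual credit decision explainable is to turn each applicant's largest SHAP contributions into reason codes. A hedged sketch, with an invented dataset and feature names rather than any particular lender's model:

```python
import numpy as np
import pandas as pd
import shap
import xgboost

# Synthetic credit data; feature names are illustrative only.
rng = np.random.default_rng(1)
n = 1000
X = pd.DataFrame({
    "credit_score": rng.integers(450, 800, n),
    "utilization": rng.uniform(0, 1, n),
    "delinquencies": rng.integers(0, 5, n),
})
# Synthetic label: 1 = default.
y = ((X["credit_score"] < 600) | (X["delinquencies"] > 2)).astype(int)

model = xgboost.XGBClassifier(n_estimators=100).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

def reason_codes(i, k=2):
    """Top-k features pushing applicant i toward higher default risk."""
    contrib = pd.Series(shap_values[i], index=X.columns)
    return contrib.sort_values(ascending=False).head(k).index.tolist()

# Explain why applicant 0 was scored the way they were.
print(reason_codes(0))
```

Per-decision reason codes of this kind are what lets a lender answer, for any single rejection, which inputs drove the model's conclusion.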