Explainable AI: A guide for making black box machine learning models explainable

#artificialintelligence

Machine learning (ML), which many people conflate with the broader discipline of artificial intelligence (AI), is not without its issues. ML works by feeding historical, real-world data to algorithms that train models. A trained model can then be fed new data and produce results of interest, based on the historical data it was trained on.
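The article describes this train-then-predict workflow only in prose; the following is a minimal sketch of that workflow, assuming scikit-learn and a synthetic stand-in for the "historical" data (the dataset and model choice are illustrative, not from the article).

```python
# Minimal sketch of the train-on-historical, predict-on-new workflow.
# The synthetic dataset stands in for real historical data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic "historical" data: features X and observed outcomes y.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_hist, X_new, y_hist, y_new = train_test_split(X, y, test_size=0.2, random_state=0)

# Train a model on the historical data.
model = RandomForestClassifier(random_state=0).fit(X_hist, y_hist)

# Feed the trained model new data to produce results of interest.
predictions = model.predict(X_new)
print("First five predictions:", predictions[:5])
print("Held-out accuracy:", model.score(X_new, y_new))
```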


Towards Self-Explainable Cyber-Physical Systems

arXiv.org Artificial Intelligence

With the increasing complexity of cyber-physical systems (CPSs), their behavior and decisions become increasingly difficult for users and other stakeholders to understand. Our vision is to build self-explainable systems that can, at run-time, answer questions about the system's past, current, and future behavior. As no design methodology or reference framework yet exists for building such systems, we propose the MAB-EX framework for building self-explainable systems that leverage requirements and explainability models at run-time. The basic idea of MAB-EX is to first Monitor and Analyze a certain behavior of the system, then Build an explanation from explanation models and convey this EXplanation in a suitable way to a stakeholder. We also take into account that new explanations can be learned, by updating the explanation models, should new and as-yet-unexplainable behavior be detected by the system.
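The abstract names the four MAB-EX phases but no concrete interface; the sketch below shows one plausible shape of that Monitor-Analyze-Build-EXplain loop. Every class, method, and attribute name here is a hypothetical assumption, not the paper's API.

```python
# Hypothetical sketch of the MAB-EX loop. The framework defines the phases
# (Monitor, Analyze, Build, EXplain); this particular API is assumed.
from dataclasses import dataclass, field


@dataclass
class ExplanationModel:
    """Maps known behavior patterns to explanation templates."""
    templates: dict = field(default_factory=dict)

    def explain(self, behavior):
        # Returns None when the behavior is not yet explainable.
        return self.templates.get(behavior)

    def learn(self, behavior, explanation):
        # Update the model when new, yet-unexplainable behavior appears.
        self.templates[behavior] = explanation


def mab_ex_step(system, model: ExplanationModel, stakeholder):
    observation = system.monitor()           # Monitor: observe run-time behavior
    behavior = system.analyze(observation)   # Analyze: classify the behavior
    explanation = model.explain(behavior)    # Build: derive explanation from model
    if explanation is None:
        # New behavior detected: derive an explanation (e.g., from logs or
        # an expert) and extend the explanation model.
        explanation = system.derive_explanation(behavior)
        model.learn(behavior, explanation)
    stakeholder.convey(explanation)          # EXplain: deliver to the stakeholder
```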


Interpretable vs Explainable Machine Learning

#artificialintelligence

From medical diagnoses to credit underwriting, machine learning models are being used to make increasingly important decisions. To trust the systems powered by these models, we need to know how they make predictions. This is why the difference between an interpretable and an explainable model is important. How we understand our models, and the degree to which we can truly understand them, depends on whether they are interpretable or explainable. Put briefly, an interpretable model can be understood by a human without any other aids or techniques.
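To make the distinction concrete, here is a small sketch assuming scikit-learn: a linear model whose coefficients can be read off directly (interpretable), next to a black-box ensemble that needs a post-hoc aid, here permutation importance, before we can say what drives its predictions. The models and data are illustrative choices, not from the article.

```python
# Interpretable vs. explainable: read a linear model directly, but probe a
# black-box ensemble with a post-hoc technique (permutation importance).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Interpretable: each coefficient states a feature's effect outright.
linear = LogisticRegression(max_iter=1000).fit(X, y)
print("Coefficients:", linear.coef_)

# Explainable only with aid: the ensemble's internals are opaque, so we
# estimate feature influence by permuting inputs and measuring the damage.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
result = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
print("Permutation importances:", result.importances_mean)
```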


Global Bigdata Conference

#artificialintelligence

How much can anyone trust a recommendation from an AI? Yaroslav Kuflinski, from Iflexion, gives an explanation of explainable AI. Your daughter is lying sedated on a gurney that's bumping towards the operating theater. It squeaks to a halt and a hurried member of hospital staff thrusts a form at you to sign. It describes the urgent surgical procedure your child is about to undergo, and it requires your signature if the operation is to go ahead. But here's the rub: at the top of the form, in large, bold letters, it says "DIAGNOSIS AND SURGICAL PLAN COPYRIGHT ACME ARTIFICIAL INTELLIGENCE COMPANY." At this specific moment, do you think you are owed a reasonable, plain-English explanation of all the inscrutable decisions that an AI has lately been making on your daughter's behalf? In short, do we need explainable AI?


High-Stakes AI Decisions Need to Be Automatically Audited

#artificialintelligence

Today's AI systems make weighty decisions regarding loans, medical diagnoses, parole, and more. They're also opaque systems, which makes them susceptible to bias. In the absence of transparency, we will never know why a 41-year-old white male and an 18-year-old black woman who commit similar crimes are assessed as "low risk" versus "high risk" by AI software. Oren Etzioni is CEO of the Allen Institute for Artificial Intelligence and a professor in the Allen School of Computer Science at the University of Washington. Tianhui Michael Li is founder and president of Pragmatic Data, a data science and AI training company.