Explainable and Interpretable Models in Computer Vision and Machine Learning

#artificialintelligence

This book compiles leading research on the development of explainable and interpretable machine learning methods in the context of computer vision and machine learning. Research progress in computer vision and pattern recognition has led to a variety of modeling techniques with almost human-like performance. Although these models have obtained astounding results, they are limited in their explainability and interpretability: what is the rationale behind the decisions they make? Hence, while good performance is a critical characteristic of learning machines, explainability and interpretability are needed to take learning machines to the next step and include them in decision support systems involving human supervision.


IBM Research Launches Explainable AI Toolkit

#artificialintelligence

Explainability, or interpretability, of AI is a huge deal these days, especially as more enterprises depend on decisions made by machine learning and deep learning models. Naturally, stakeholders want a level of transparency into how the algorithms came up with their recommendations. The so-called "black box" of AI is increasingly being questioned. For this reason, I was encouraged to learn of IBM's recent efforts in this area. The company's research arm just launched a new open-source AI toolkit, "AI Explainability 360," consisting of state-of-the-art algorithms that support the interpretability and explainability of machine learning models.


Hybrid Decision Making: When Interpretable Models Collaborate With Black-Box Models

arXiv.org Machine Learning

Interpretable machine learning models have received increasing interest in recent years, especially in domains where humans are involved in the decision-making process. However, some loss of task performance in exchange for interpretability is often inevitable. This performance downgrade puts practitioners in a dilemma: choose between a top-performing black-box model with no explanations and an interpretable model with unsatisfying task performance. In this work, we propose a novel framework for building a Hybrid Decision Model that integrates an interpretable model with any black-box model, introducing explanations into the decision-making process while preserving or possibly improving predictive accuracy. We propose a novel metric, explainability, to measure the percentage of data sent to the interpretable model for a decision. We also design a principled objective function that considers predictive accuracy, model interpretability, and data explainability. Under this framework, we develop the Collaborative Black-box and RUle Set Hybrid (CoBRUSH) model, which combines logic rules and any black-box model into a joint decision model. An input instance is first sent to the rules for a decision. If a rule is satisfied, a decision is generated directly. Otherwise, the black-box model is activated to decide on the instance. To train a hybrid model, we design an efficient search algorithm that exploits theoretically grounded strategies to reduce computation. Experiments show that CoBRUSH models achieve the same or better accuracy than their black-box collaborators working alone while gaining explainability. They also have smaller model complexity than interpretable baselines.
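The rule-first routing the abstract describes is simple enough to sketch. Below is a minimal, illustrative hybrid model in Python; the HybridModel class, the rule representation, and the random-forest collaborator are assumptions for demonstration, not the authors' CoBRUSH implementation or its learned rule sets.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class HybridModel:
    """Rule-first router: a rule decides when it fires, else the black box."""

    def __init__(self, rules, blackbox):
        # rules: list of (condition_fn, label) pairs, where condition_fn
        # maps a 1-D feature vector to True/False.
        # blackbox: any fitted classifier exposing .predict().
        self.rules = rules
        self.blackbox = blackbox

    def predict(self, X):
        X = np.asarray(X)
        preds, rule_hits = [], 0
        for x in X:
            for cond, label in self.rules:
                if cond(x):                      # interpretable path
                    preds.append(label)
                    rule_hits += 1
                    break
            else:                                # no rule fired: fall back
                preds.append(self.blackbox.predict(x.reshape(1, -1))[0])
        # "explainability" in the paper's sense: fraction of instances
        # decided by the rule set rather than the black box.
        return np.array(preds), rule_hits / len(X)

# Toy usage: one hand-written rule plus a random-forest fallback.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 3))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

forest = RandomForestClassifier(random_state=0).fit(X_train, y_train)
rules = [(lambda x: x[0] > 1.0, 1)]  # hypothetical rule: large x0 -> class 1
hybrid = HybridModel(rules, forest)

preds, explainability = hybrid.predict(rng.normal(size=(50, 3)))
print(f"{explainability:.0%} of test instances decided by rules")
```

In the paper, the rules themselves are learned jointly with the routing objective; here they are hard-coded only to make the dispatch logic and the explainability metric concrete.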


Introducing AI Explainability 360 IBM Research Blog

#artificialintelligence

The toolkit has been engineered with a common interface for all of the different ways of explaining (not an easy feat) and is extensible to accelerate innovation by the community advancing AI explainability. We are open-sourcing it to help create a community of practice for data scientists, policymakers, and members of the general public who need to understand how algorithmic decision making affects them. AI Explainability 360 differs from other open-source explainability offerings [1] through the diversity of its methods, its focus on educating a variety of stakeholders, and its extensibility via a common framework. Moreover, it interoperates with AI Fairness 360 and Adversarial Robustness 360, two other open-source toolboxes from IBM Research released in 2018, to support the development of holistic, trustworthy machine learning pipelines. The initial release contains eight algorithms recently created by IBM Research, and also includes metrics from the community that serve as quantitative proxies for the quality of explanations. Beyond the initial release, we encourage contributions of other algorithms from the broader research community.
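A "common interface" of this kind can be pictured as a small abstract base class that every explanation method implements. The sketch below is a hypothetical illustration of that design pattern; the class and method names are invented for demonstration and are not the actual aix360 API.

```python
# Hypothetical sketch of a shared explainer contract; invented names,
# not the aix360 API.
from abc import ABC, abstractmethod
import numpy as np

class Explainer(ABC):
    """Shared contract so explanation methods are interchangeable."""

    @abstractmethod
    def fit(self, model, X_background):
        """Prepare the explainer for a given model and reference data."""

    @abstractmethod
    def explain(self, x):
        """Return an explanation for a single instance."""

class MeanSubstitutionExplainer(Explainer):
    """Crude per-feature importance: replace each feature with the
    background mean and record the drop in the model's top score."""

    def fit(self, model, X_background):
        self.model = model
        self.means = np.mean(X_background, axis=0)
        return self

    def explain(self, x):
        x = np.asarray(x, dtype=float)
        base = self.model.predict_proba(x.reshape(1, -1))[0].max()
        deltas = []
        for j in range(x.size):
            x_pert = x.copy()
            x_pert[j] = self.means[j]          # neutralize feature j
            pert = self.model.predict_proba(x_pert.reshape(1, -1))[0].max()
            deltas.append(base - pert)
        return {"feature_deltas": deltas}
```

The value of such a contract is that downstream tooling, such as dashboards or model-validation pipelines, can swap one explanation method for another without changing any calling code.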


2021 Trends in Data Science: The Entire AI Spectrum - insideBIGDATA

#artificialintelligence

As an enterprise discipline, data science is the antithesis of Artificial Intelligence. The former is an unrestrained field in which creativity, innovation, and efficacy are the only limitations; the latter is bound by innumerable restrictions regarding engineering, governance, regulations, and the proverbial bottom line. Nevertheless, the tangible business value prized in enterprise applications of AI is almost always spawned from data science. The ModelOps trend spearheading today's cognitive computing has a vital, distinctive counterpart in the realm of data science. Whereas ModelOps is centered on solidifying operational consistency for all forms of AI, from its knowledge base to its statistical base, data science is the tacit force underpinning this movement by expanding the sorts of data involved in these undertakings.