
Explainable Machine Learning in Deployment

arXiv.org Artificial Intelligence

Explainable machine learning seeks to provide various stakeholders with insights into model behavior via feature importance scores, counterfactual explanations, and influential samples, among other techniques. Recent advances in this line of work, however, have not been accompanied by studies of how organizations use these techniques in practice. This study explores how organizations view and use explainability for stakeholder consumption. We find that the majority of deployments are not for end users affected by the model but for machine learning engineers, who use explainability to debug the model itself. There is a gap between explainability in practice and the goal of public transparency, since explanations primarily serve internal stakeholders rather than external ones. Our study synthesizes the limitations of current explainability techniques that hamper their use for end users. To facilitate end user interaction, we develop a framework for establishing clear goals for explainability, including a focus on normative desiderata.
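
For concreteness, the sketch below computes permutation-based feature importance, one of the techniques named above and the kind of signal a machine learning engineer might inspect when debugging a model. The dataset, model, and scikit-learn usage are illustrative stand-ins, not artifacts of the study.

```python
# A minimal sketch of feature-importance-based debugging (illustrative data and model).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative dataset and model, standing in for whatever an organization deploys.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in score;
# large drops flag the features the model actually relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:<25} {result.importances_mean[idx]:.3f}")
```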


Preparing For AI Ethics And Explainability In 2020

#artificialintelligence

How do we balance the potential benefits of deep learning with the need for explainability? People distrust artificial intelligence, and in some ways this makes sense. In the drive to build the best-performing AI models, many organizations have prioritized complexity over explainability and trust. As the world becomes more dependent on algorithms for making a wide range of decisions, technology and business leaders will be tasked with explaining how a model arrived at its outcome. Transparency is an essential requirement for generating trust and AI adoption.


Introducing AI Explainability 360 IBM Research Blog

#artificialintelligence

The toolkit has been engineered with a common interface for all of the different ways of explaining (not an easy feat) and is extensible to accelerate innovation by the community advancing AI explainability. We are open sourcing it to help create a community of practice for data scientists, policymakers, and the general public who need to understand how algorithmic decision making affects them. AI Explainability 360 differs from other open source explainability offerings [1] through the diversity of its methods, its focus on educating a variety of stakeholders, and its extensibility via a common framework. Moreover, it interoperates with AI Fairness 360 and Adversarial Robustness 360, two other open-source toolboxes from IBM Research released in 2018, to support the development of holistic trustworthy machine learning pipelines. The initial release contains eight algorithms recently created by IBM Research, and also includes metrics from the community that serve as quantitative proxies for the quality of explanations. Beyond the initial release, we encourage contributions of other algorithms from the broader research community.
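
A minimal, hypothetical sketch of what such a common explainer interface could look like is given below; the class and method names are illustrative assumptions, not AI Explainability 360's actual API.

```python
# Hypothetical sketch of a shared explainer contract (NOT AIX360's real API).
from abc import ABC, abstractmethod
from typing import Any


class Explainer(ABC):
    """Shared contract so different explanation methods are interchangeable."""

    @abstractmethod
    def fit(self, X: Any, y: Any = None) -> "Explainer":
        """Learn anything the explainer needs from reference data."""

    @abstractmethod
    def explain(self, instance: Any) -> dict:
        """Return an explanation (e.g. feature attributions) for one input."""


class ZeroAttributionExplainer(Explainer):
    """Trivial placeholder showing how one method plugs into the contract."""

    def fit(self, X, y=None):
        self.n_features_ = len(X[0])
        return self

    def explain(self, instance):
        return {"attributions": [0.0] * self.n_features_}


# Any explainer honoring the contract can be swapped in behind the same calls.
explainer = ZeroAttributionExplainer().fit([[1.0, 2.0, 3.0]])
print(explainer.explain([0.5, 0.5, 0.5]))
```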


One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques

#artificialintelligence

As artificial intelligence and machine learning algorithms make further inroads into society, calls are increasing from multiple stakeholders for these algorithms to explain their outputs. At the same time, these stakeholders, whether they be affected citizens, government regulators, domain experts, or system developers, present different requirements for explanations. Toward addressing these needs, we introduce AI Explainability 360 (http://aix360.mybluemix.net/), an open-source toolkit of explanation algorithms and evaluation metrics. Equally important, we provide a taxonomy to help entities requiring explanations to navigate the space of explanation methods, not only those in the toolkit but also in the broader literature on explainability. For data scientists and other users of the toolkit, we have implemented an extensible software architecture that organizes methods according to their place in the AI modeling pipeline.
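
The sketch below illustrates, in simplified and paraphrased form, how such a taxonomy can guide a stakeholder toward a family of explanation methods; the questions and method families are assumptions for illustration, not the toolkit's exact taxonomy.

```python
# Illustrative sketch of taxonomy-guided method selection (simplified paraphrase,
# not the toolkit's full taxonomy).
def suggest_method_family(explains_data: bool, self_interpretable: bool, local: bool) -> str:
    """Map three common taxonomy questions to a family of explanation methods."""
    if explains_data:
        return "data explanations (e.g. prototypical samples of the dataset)"
    if self_interpretable:
        return "directly interpretable models (e.g. rule sets)"
    if local:
        return "post-hoc local explanations (e.g. per-instance attributions)"
    return "post-hoc global explanations (e.g. surrogate models of overall behavior)"


# A domain expert asking "why this particular decision?" lands on a local method:
print(suggest_method_family(explains_data=False, self_interpretable=False, local=True))
```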


Questioning the AI: Informing Design Practices for Explainable AI User Experiences

arXiv.org Artificial Intelligence

A surge of interest in explainable AI (XAI) has led to a vast collection of algorithmic work on the topic. While many recognize the necessity to incorporate explainability features in AI systems, how to address real-world user needs for understanding AI remains an open question. By interviewing 20 UX and design practitioners working on various AI products, we seek to identify gaps between the current XAI algorithmic work and practices to create explainable AI products. To do so, we develop an algorithm-informed XAI question bank in which user needs for explainability are represented as prototypical questions users might ask about the AI, and use it as a study probe. Our work contributes insights into the design space of XAI, informs efforts to support design practices in this space, and identifies opportunities for future XAI work. We also provide an extended XAI question bank and discuss how it can be used for creating user-centered XAI.
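
As an illustration, a question bank used as a study probe can be organized as a simple mapping from question categories to prototypical user questions; the categories and wording below are abridged paraphrases for illustration, not the paper's full question bank.

```python
# Abridged, paraphrased sketch of a question-bank data structure (illustrative only).
XAI_QUESTION_BANK = {
    "How (global)": ["How does the system make predictions overall?"],
    "Why": ["Why was this instance given this prediction?"],
    "Why not": ["Why was this instance not predicted to be something else?"],
    "What if": ["What would the system predict if this input changed?"],
    "How to change": ["What would need to change to get a different prediction?"],
    "Performance": ["How accurate or reliable are the predictions?"],
    "Data": ["What data was the system trained on?"],
}

# A designer probing a loan-approval AI might pull the "Why not" questions:
for question in XAI_QUESTION_BANK["Why not"]:
    print(question)
```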