Introducing AI Explainability 360 (IBM Research Blog)

#artificialintelligence

The toolkit has been engineered with a common interface for all of the different ways of explaining (not an easy feat) and is extensible, so that the community advancing AI explainability can innovate on top of it. We are open sourcing it to help create a community of practice for data scientists, policymakers, and the general public who need to understand how algorithmic decision making affects them. AI Explainability 360 differs from other open source explainability offerings [1] through the diversity of its methods, its focus on educating a variety of stakeholders, and its extensibility via a common framework. Moreover, it interoperates with AI Fairness 360 and Adversarial Robustness 360, two other open-source toolboxes from IBM Research released in 2018, to support the development of holistic, trustworthy machine learning pipelines. The initial release contains eight algorithms recently created by IBM Research, and also includes metrics from the community that serve as quantitative proxies for the quality of explanations. Beyond the initial release, we encourage contributions of other algorithms from the broader research community.
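To make the "common interface" idea concrete, here is a minimal Python sketch of what a shared explainer contract could look like. The names used (BaseExplainer, fit, explain_instance, NearestPrototypeExplainer) are illustrative assumptions for this sketch and are not the actual AI Explainability 360 API.

```python
# Minimal sketch of a "common explainer interface" in the spirit described above.
# All names here (BaseExplainer, NearestPrototypeExplainer, explain_instance) are
# illustrative only and are NOT the actual AI Explainability 360 API.
from abc import ABC, abstractmethod

import numpy as np


class BaseExplainer(ABC):
    """Shared contract that every explanation method implements."""

    @abstractmethod
    def fit(self, X: np.ndarray, y: np.ndarray) -> "BaseExplainer":
        """Learn whatever the method needs from training data."""

    @abstractmethod
    def explain_instance(self, x: np.ndarray) -> dict:
        """Return an explanation for a single input as a plain dict."""


class NearestPrototypeExplainer(BaseExplainer):
    """Toy example-based explainer: justify a prediction by the closest training point."""

    def fit(self, X, y):
        self.X_, self.y_ = np.asarray(X), np.asarray(y)
        return self

    def explain_instance(self, x):
        idx = int(np.argmin(np.linalg.norm(self.X_ - x, axis=1)))
        return {"prototype_index": idx, "prototype_label": self.y_[idx].item()}


# Because every explainer honors the same contract, downstream code
# (dashboards, metrics, pipelines) can swap methods without changes.
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0, 1, 1])
explainer = NearestPrototypeExplainer().fit(X, y)
print(explainer.explain_instance(np.array([0.9, 0.9])))
```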


Machine Learning Explainability for External Stakeholders

arXiv.org Artificial Intelligence

As machine learning is increasingly deployed in high-stakes contexts affecting people's livelihoods, there have been growing calls to open the black box and to make machine learning algorithms more explainable. Providing useful explanations requires careful consideration of the needs of stakeholders, including end-users, regulators, and domain experts. Despite this need, little work has been done to facilitate inter-stakeholder conversation around explainable machine learning. To help address this gap, we conducted a closed-door, day-long workshop between academics, industry experts, legal scholars, and policymakers to develop a shared language around explainability and to understand the current shortcomings of, and potential solutions for, deploying explainable machine learning in service of transparency goals. We also asked participants to share case studies in deploying explainable machine learning at scale. In this paper, we provide a short summary of various case studies of explainable machine learning and the lessons from those studies, and we discuss open challenges.


Explainable Artificial Intelligence

#artificialintelligence

Introduction In the era of data science, artificial intelligence is making impossible feats possible. Driverless cars, IBM Watson's question-answering system, cancer detection, electronic trading, and more are all made possible through the advanced decision-making ability of artificial intelligence. The deep layers of neural networks have a seemingly magical ability to mimic human decision making. When humans make decisions, they can explain the thought process behind them. They can explain the rationale, whether it is driven by observation, intuition, experience, or logical reasoning.


Grilling the answers: How businesses need to show how AI decides

#artificialintelligence

Show your working: generations of mathematics students have grown up with this mantra. Getting the right answer is not enough. To get top marks, students must demonstrate how they got there. Now, machines need to do the same. As artificial intelligence (AI) is used to make decisions affecting employment, finance, or justice, rather than merely suggesting which film a consumer might want to watch next, the public will insist that it shows its working.


Explainable AI: Making Sense of the Black Box

#artificialintelligence

The Black Square is an iconic painting by the Russian artist Kazimir Malevich. The first version was done in 1915. The Black Square continues to impress art historians even today; however, it did not impress the Soviet government of the time and was kept in such poor conditions that it suffered significant cracking and decay. Complex machine learning algorithms can be mathematical works of art, but if these black box algorithms fail to impress and build trust with users, they might be ignored like Malevich's Black Square. Dramatic success in machine learning has led to a surge of artificial intelligence (AI) applications.