AI Explainability 360


GitHub - Trusted-AI/AIX360: Interpretability and explainability of data and machine learning models

#artificialintelligence

The AI Explainability 360 toolkit is an open-source library that supports interpretability and explainability of datasets and machine learning models. The AI Explainability 360 Python package includes a comprehensive set of algorithms that cover different dimensions of explanations along with proxy explainability metrics. The AI Explainability 360 interactive experience provides a gentle introduction to the concepts and capabilities by walking through an example use case for different consumer personas. The tutorials and example notebooks offer a deeper, data scientist-oriented introduction. The complete API is also available.
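As a rough sketch of what one of those proxy explainability metrics measures, the snippet below implements a self-contained faithfulness-style score: it checks whether features given high attribution actually move the model's prediction when they are perturbed. This is not the toolkit's own code or API; the model, dataset, and placeholder attributions are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Hypothetical model and data; stand-ins for whatever the user actually has.
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def faithfulness_score(model, x, attributions, baseline):
    """Correlate each feature's attribution with the drop in predicted
    probability observed when that feature is replaced by a baseline value."""
    base_prob = model.predict_proba(x.reshape(1, -1))[0, 1]
    drops = []
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] = baseline[i]  # "remove" feature i
        drops.append(base_prob - model.predict_proba(x_pert.reshape(1, -1))[0, 1])
    # High correlation means the attributions agree with the model's behaviour.
    return np.corrcoef(attributions, drops)[0, 1]

# Placeholder attributions: the forest's global importances stand in for a
# per-instance explanation here.
attrs = model.feature_importances_
print(faithfulness_score(model, X[0], attrs, baseline=X.mean(axis=0)))
```

In practice the attributions would come from one of the toolkit's explainers rather than from global feature importances; the point is only to show what a proxy metric compares.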


AI Explainability 360: Impact and Design

#artificialintelligence

This section highlights the impact of the AIX360 toolkit in the first two years since its release. It describes several different forms of impact on real problem domains and the open source community. This impact has resulted in improvements in multiple metrics: accuracy, semiconductor yield, satisfaction rate, and domain expert time. The current version of the AIX360 toolkit includes ten explainability algorithms, described in Table 1, covering different ways of explaining. Explanation methods can be either local or global: the former explain an AI model's decision for a single instance, while the latter explain a model in its entirety.
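To make the local/global distinction concrete, here is a minimal sketch that does not use AIX360 itself; the model, data, and naive perturbation scheme are illustrative assumptions. A global explanation summarizes which features matter to the model overall, while a local one probes the model's behaviour around a single instance.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# GLOBAL: how much each feature matters to the model over the whole dataset.
global_imp = permutation_importance(model, X, y, n_repeats=10,
                                    random_state=0).importances_mean

# LOCAL: how sensitive the prediction for ONE instance is to each feature,
# estimated by replacing that feature with its dataset mean.
x = X[0]
pred = model.predict_proba(x.reshape(1, -1))[0]
local_imp = []
for i in range(X.shape[1]):
    x_pert = x.copy()
    x_pert[i] = X[:, i].mean()
    local_imp.append(np.abs(pred - model.predict_proba(x_pert.reshape(1, -1))[0]).sum())

print("global importances:", np.round(global_imp, 3))
print("local importances (instance 0):", np.round(local_imp, 3))
```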


AI Explainability 360: Impact and Design

Arya, Vijay, Bellamy, Rachel K. E., Chen, Pin-Yu, Dhurandhar, Amit, Hind, Michael, Hoffman, Samuel C., Houde, Stephanie, Liao, Q. Vera, Luss, Ronny, Mojsilovic, Aleksandra, Mourad, Sami, Pedemonte, Pablo, Raghavendra, Ramya, Richards, John, Sattigeri, Prasanna, Shanmugam, Karthikeyan, Singh, Moninder, Varshney, Kush R., Wei, Dennis, Zhang, Yunfeng

arXiv.org Artificial Intelligence

As artificial intelligence and machine learning algorithms become increasingly prevalent in society, multiple stakeholders are calling for these algorithms to provide explanations. At the same time, these stakeholders, whether they be affected citizens, government regulators, domain experts, or system developers, have different explanation needs. To address these needs, in 2019, we created AI Explainability 360 (Arya et al. 2020), an open source software toolkit featuring ten diverse and state-of-the-art explainability methods and two evaluation metrics. This paper examines the impact of the toolkit with several case studies, statistics, and community feedback. The different ways in which users have experienced AI Explainability 360 have resulted in multiple types of impact and improvements in multiple metrics, highlighted by the adoption of the toolkit by the independent LF AI & Data Foundation. The paper also describes the flexible design of the toolkit, examples of its use, and the significant educational material and documentation available to its users.


Global Big Data Conference

#artificialintelligence

IBM on Monday announced it's donating a series of open-source toolkits designed to help build trusted AI to a Linux Foundation project, the LF AI Foundation. As real-world AI deployments increase, IBM says the contributions can help ensure they're fair, secure, and trustworthy. "Donation of these projects to LFAI will further the mission of creating responsible AI-powered technologies and enable the larger community to come forward and co-create these tools under the governance of Linux Foundation," IBM said in a blog post penned by Todd Moore, Sriram Raghavan, and Aleksandra Mojsilovic. Specifically, IBM is contributing the AI Fairness 360 Toolkit, the Adversarial Robustness 360 Toolbox, and the AI Explainability 360 Toolkit.


Preparing For AI Ethics And Explainability In 2020

#artificialintelligence

How do we balance the potential benefits of deep learning with the need for explainability? People distrust artificial intelligence, and in some ways this makes sense. In the drive to build the best-performing AI models, many organizations have prioritized complexity over explainability and trust. As the world becomes more dependent on algorithms for making a wide range of decisions, technology and business leaders will be tasked with explaining how a model arrived at its outcome. Transparency is an essential requirement for generating trust and driving AI adoption.


AI Year in Review: Highlights of Papers from IBM Research in 2019

#artificialintelligence

January 17, 2020 Written by: John R. Smith IBM Research has a long history as a leader in the field of Artificial Intelligence (AI). IBM's pioneering work in AI dates back to the field's inception in the 1950s, when IBM developed one of the first instances of machine learning, applied to the game of checkers. Since then, IBM has been responsible for major milestones in AI, ranging from Deep Blue, the first chess-playing computer to defeat a reigning world champion, to Watson, the first natural-language question-answering system able to win at Jeopardy!, to last year's Project Debater, the first AI system that can build persuasive arguments on its own and effectively engage in debates on complex topics. IBM's leadership in AI continued in earnest in 2019, which was notable for a growing focus on critical topics such as making trustworthy AI work in practice, creating new AI engineering paradigms to scale AI for broader use, and continuing to advance core AI capabilities in language, speech, vision, knowledge & reasoning, human-centered AI, and more. While recent years have seen incredible progress in "narrow AI" built on technologies like deep learning, IBM Research pushed its AI research in 2019 toward a new foundational underpinning of AI for enterprise applications: learning more from less, enabling trusted AI by ensuring the fairness, explainability, adversarial robustness, and transparency of AI systems, and integrating learning and reasoning as a way to understand more in order to do more.


Global Data Science Forum - IBM Data Science Community

#artificialintelligence

While building sophisticated machine learning models is getting easier, understanding how models develop knowledge and arrive at conclusions remains a very difficult challenge. Typically, the more accurate the model, the harder it is to interpret. KDnuggets posted an article highlighting the value and resources surrounding AI Explainability - what do you think? Are these tutorials useful for you? I would also appreciate more posts like this in the future.


8 Explainable AI Frameworks Driving A New Paradigm For Transparency In AI

#artificialintelligence

Because of the opacity of deep learning solutions, there has been a lot of talk about how to make explainability part of the ML pipeline. Explainable AI refers to methods and techniques in the application of artificial intelligence technology (AI) such that the results of the solution can be understood by human experts. It contrasts with the concept of the "black box" in machine learning and enables transparency. The first picture consists of a bunch of mathematical expressions chained together that represent how the inner layers of an algorithm or a neural network function. The second picture also shows the workings of an algorithm, but the message is more lucid.
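One common way to get from the first picture to the second is to distill the black box into an interpretable surrogate. The sketch below is a hypothetical illustration, not code from any of the frameworks the article covers: it trains a small neural network, then fits a shallow decision tree to the network's predictions so its behaviour can be read as if/then rules.

```python
from sklearn.datasets import load_wine
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_wine(return_X_y=True)

# The "first picture": chained matrix multiplications inside a neural network.
black_box = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
).fit(X, y)

# The "second picture": a shallow tree that mimics the network's predictions
# and can be printed as human-readable rules.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate, feature_names=load_wine().feature_names))
```

The surrogate is only an approximation of the network, which is exactly why explainability toolkits also ship metrics for judging how faithful an explanation is to the model it describes.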


IBM Research Launches Explainable AI Toolkit

#artificialintelligence

Explainability or interpretability of AI is a huge deal these days, especially due to the rise in the number of enterprises depending on decisions made by machine learning and deep learning models. Naturally, stakeholders want a level of transparency into how the algorithms came up with their recommendations. The so-called "black box" of AI is rapidly being questioned. For this reason, I was encouraged to learn of IBM's recent efforts in this area. The company's research arm just launched a new open-source AI toolkit, "AI Explainability 360," consisting of state-of-the-art algorithms that support the interpretability and explainability of machine learning models.


IBM Research launches explainable AI toolkit

#artificialintelligence

IBM Research today introduced AI Explainability 360, an open source collection of state-of-the-art algorithms that use a range of techniques to explain AI model decision-making. The launch follows IBM's release a year ago of AI Fairness 360 for the detection and mitigation of bias in AI models. IBM is sharing its latest toolkit in order to increase trust and verification of artificial intelligence and help businesses that must comply with regulations to use AI, IBM Research fellow and responsible AI lead Saska Mojsilovic told VentureBeat in a phone interview. "That's fundamentally important, because we know people in organizations will not use or deploy AI technologies unless they really trust their decisions. And because we create infrastructure for a good part of this world, it is fundamentally important for us -- not because of our own internal deployments of AI or products that we might have in this space, but it's fundamentally important to create these capabilities because our clients and the world will leverage them," she said.