A 'Glut' of Innovation Spotted in Data Science and ML Platforms

#artificialintelligence

These are heady days in data science and machine learning (DSML) according to Gartner, which identified a "glut" of innovation occurring in the market for DSML platforms. From established companies chasing AutoML or model governance to startups focusing on MLOps or explainable AI, a plethora of vendors are simultaneously moving in all directions with their products as they seek to differentiate themselves amid a very diverse audience. "The DSML market is simultaneously more vibrant and messier than ever," a gaggle of Gartner analysts led by Peter Krensky wrote in the Magic Quadrant for DSML Platforms, which was published earlier this month. "The definitions and parameters of data science and data scientists continue to evolve, and the market is dramatically different from how it was in 2014, when we published the first Magic Quadrant on it." The 2021 Magic Quadrant for DSML is heavily populated by companies on the right side of the quadrant, which, as anybody familiar with Gartner's assessment method knows, represents "completeness of vision."


What Are Explainable AI Principles

#artificialintelligence

Explainable AI (XAI) principles are a set of guidelines for the fundamental properties that explainable AI systems should adopt. Explainable AI seeks to explain the way that AI systems work. NIST's draft report names the four principles as Explanation, Meaningful, Explanation Accuracy, and Knowledge Limits. These four principles capture a variety of disciplines that contribute to explainable AI, including computer science, engineering, and psychology. They apply individually, so the presence of one does not imply that the others will be present; NIST suggests that each principle be evaluated in its own right.
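
As a rough illustration of the point that each principle is assessed in its own right, here is a minimal Python sketch (hypothetical, not NIST's own evaluation procedure): it records an independent pass/fail judgment for each of the four principles. The checklist questions and the example `evidence` dictionary are invented for illustration.

```python
# Hypothetical sketch: assessing a system against the four NIST XAI principles
# independently. The principle names follow the NIST draft; the checklist
# questions and the example evidence are illustrative only.
from dataclasses import dataclass


@dataclass
class PrincipleAssessment:
    name: str
    question: str
    satisfied: bool


def assess_system(evidence: dict) -> list[PrincipleAssessment]:
    """Score each principle on its own; one passing does not imply another."""
    return [
        PrincipleAssessment(
            "Explanation",
            "Does the system supply reasons for its outputs?",
            evidence.get("provides_explanations", False),
        ),
        PrincipleAssessment(
            "Meaningful",
            "Can the intended audience understand the explanation?",
            evidence.get("understandable_to_users", False),
        ),
        PrincipleAssessment(
            "Explanation Accuracy",
            "Does the explanation faithfully reflect the system's process?",
            evidence.get("faithful_to_model", False),
        ),
        PrincipleAssessment(
            "Knowledge Limits",
            "Does the system flag cases outside its competence?",
            evidence.get("reports_confidence_limits", False),
        ),
    ]


if __name__ == "__main__":
    # A system may satisfy some principles and not others.
    evidence = {"provides_explanations": True, "understandable_to_users": True}
    for a in assess_system(evidence):
        print(f"{a.name:22s} {'PASS' if a.satisfied else 'FAIL'}  ({a.question})")
```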


Temenos demystifies artificial intelligence, helping banks fight the black box effect

#artificialintelligence

The banking software company is teaming up with Canadian Western Bank (CWB) to provide its new Temenos Virtual COO solution to small and medium-sized businesses (SMBs). The product is built on top of Temenos' omnichannel digital banking platform and uses explainable AI (XAI) and analytics to support financial decision-making at SMBs. By aggregating banking and business data, the product lets SMBs assess their current and projected financial health through XAI-powered models that simulate different business scenarios. Banks could use XAI technology to address the black-box problem associated with traditional AI models used in banking. While XAI is a powerful tool for generating financial insights, banks should use it to complement their existing interactions with customers, not replace them.
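
As a purely hypothetical illustration of what an explainable scenario simulation can look like (this is not Temenos' actual Virtual COO implementation), the sketch below projects an SMB's cash position under two scenarios with a fully transparent formula and reports how much each named driver contributed, so the outcome can be traced rather than treated as a black box. All figures and the `project_cash` helper are invented.

```python
# Hypothetical illustration (not Temenos' actual product logic): a transparent
# scenario model that projects an SMB's cash position and attributes the
# result to named drivers, so the projection is explainable.

def project_cash(opening_cash, revenue, revenue_growth, cost_ratio, loan_payment, months=6):
    """Project cash month by month and record each driver's contribution."""
    cash = opening_cash
    contributions = {"revenue": 0.0, "costs": 0.0, "loan_payment": 0.0}
    for _ in range(months):
        revenue *= 1 + revenue_growth
        costs = revenue * cost_ratio
        contributions["revenue"] += revenue
        contributions["costs"] -= costs
        contributions["loan_payment"] -= loan_payment
        cash += revenue - costs - loan_payment
    return cash, contributions


if __name__ == "__main__":
    scenarios = {
        "baseline": {"revenue_growth": 0.00, "cost_ratio": 0.70},
        "downturn": {"revenue_growth": -0.05, "cost_ratio": 0.75},
    }
    for name, s in scenarios.items():
        cash, parts = project_cash(opening_cash=50_000, revenue=20_000,
                                    loan_payment=2_000, **s)
        drivers = ", ".join(f"{k}: {v:+,.0f}" for k, v in parts.items())
        print(f"{name}: projected cash {cash:,.0f}  ({drivers})")
```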


An introduction to Explainable Artificial Intelligence or xAI

#artificialintelligence

A few years ago, when I was still working for IBM, I managed an AI project for a bank. During the final phase, my team and I went to the steering committee to present the results. As the proud project leader, I showed that the model had achieved 98 percent accuracy in detecting fraudulent transactions. In my manager's eyes, I could see a general panic when I explained that we had used an artificial neural network, that it worked with a system of synapses and weight adjustments, and that, although very efficient, there was no way to understand its logic objectively. Even though it was based on real facts, this raw explanation put the project's continuation at risk at that time, unless we could provide a full explanation that the senior executives could understand and trust.
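
One common way to produce the kind of explanation such a committee asks for is to fit a transparent surrogate model to the black box's predictions. The sketch below is a generic illustration on synthetic data (not the bank project described here, and only one of several possible XAI techniques): a neural network plays the black box, and a shallow decision tree is trained to imitate it, yielding readable rules together with a fidelity score that says how faithfully they track the original model.

```python
# Generic illustration on synthetic data (not the bank project above): fit a
# shallow, readable surrogate tree to a black-box neural network so its
# decision logic can be inspected, and measure the surrogate's fidelity.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for transaction features and fraud labels.
X, y = make_classification(n_samples=5000, n_features=8, n_informative=5,
                           weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": accurate but hard to explain directly.
black_box = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                          random_state=0).fit(X_train, y_train)

# Global surrogate: a shallow tree trained to imitate the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

fidelity = surrogate.score(X_test, black_box.predict(X_test))
print(f"black-box test accuracy: {black_box.score(X_test, y_test):.3f}")
print(f"surrogate fidelity to black box: {fidelity:.3f}")
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(8)]))
```

High fidelity means the printed rules are a reasonable summary of the black box's behavior; low fidelity suggests a deeper surrogate or a different explanation method is needed.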


How explainable artificial intelligence can help humans innovate

AIHub

The field of artificial intelligence (AI) has created computers that can drive cars, synthesize chemical compounds, fold proteins and detect high-energy particles at a superhuman level. However, these AI algorithms cannot explain the thought processes behind their decisions. A computer that masters protein folding and also tells researchers more about the rules of biology is much more useful than a computer that folds proteins without explanation. Therefore, AI researchers like me are now turning our efforts toward developing AI algorithms that can explain themselves in a manner that humans can understand. If we can do this, I believe that AI will be able to uncover and teach people new facts about the world that have not yet been discovered, leading to new innovations.


What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research

arXiv.org Artificial Intelligence

Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these stakeholders' desiderata) in a variety of contexts. However, the literature on XAI is vast and spread across multiple, largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability of artificial systems and reviews their desiderata. We provide a model that explicitly spells out the main concepts and relations necessary to consider and investigate when evaluating, adjusting, choosing, and developing explainability approaches that aim to satisfy stakeholders' desiderata. This model can serve researchers from the variety of different disciplines involved in XAI as a common ground. It emphasizes where there is interdisciplinary potential in the evaluation and development of explainability approaches.


VitrAI -- Applying Explainable AI in the Real World

arXiv.org Artificial Intelligence

With recent progress in the field of Explainable Artificial Intelligence (XAI) and its increasing use in practice, the need arises to evaluate different XAI methods and their explanation quality in practical usage scenarios. For this purpose, we present VitrAI, a web-based service with the goal of uniformly demonstrating four different XAI algorithms in the context of three real-life scenarios and evaluating their performance and comprehensibility for humans. This work reveals practical obstacles when adopting XAI methods and gives qualitative estimates of how well different approaches perform in said scenarios.


Principles of Explanation in Human-AI Systems

arXiv.org Artificial Intelligence

Explainable Artificial Intelligence (XAI) has re-emerged in response to the development of modern AI and ML systems. These systems are complex and sometimes biased, but they nevertheless make decisions that impact our lives. XAI systems are frequently algorithm-focused, starting and ending with an algorithm that implements a basic, untested idea about explainability. These systems are often not tested to determine whether the algorithm helps users accomplish any goals, and so their explainability remains unproven. We propose an alternative: to start with human-focused principles for the design, testing, and implementation of XAI systems, and to implement algorithms to serve that purpose. In this paper, we review some of the basic concepts that have been used for user-centered XAI systems over the past 40 years of research. Based on these, we describe the "Self-Explanation Scorecard", which can help developers understand how they can empower users by enabling self-explanation. Finally, we present a set of empirically grounded, user-centered design principles that may guide developers to create successful explainable systems.


Multisource AI Scorecard Table for System Evaluation

arXiv.org Artificial Intelligence

The paper describes a Multisource AI Scorecard Table (MAST) that provides the developer and user of an artificial intelligence (AI)/machine learning (ML) system with a standard checklist focused on the principles of good analysis adopted by the intelligence community (IC) to help promote the development of more understandable systems and engender trust in AI outputs. Such a scorecard enables a transparent, consistent, and meaningful understanding of AI tools applied for commercial and government use. A standard is built on compliance and agreement through policy, which requires buy-in from the stakeholders. While consistency for testing might only exist across a standard data set, the community requires discussion on verification and validation approaches which can lead to interpretability, explainability, and proper use. The paper explores how the analytic tradecraft standards outlined in Intelligence Community Directive (ICD) 203 can provide a framework for assessing the performance of an AI system supporting various operational needs. These include sourcing, uncertainty, consistency, accuracy, and visualization. Three use cases are presented as notional examples that support security for comparative analysis.
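
As a loose sketch of how such a scorecard could be represented in software (hypothetical, and not the actual MAST rubric or ICD 203), the following Python snippet rates a system on the attributes named above on a 1-5 scale and summarizes the result; the attribute list is deliberately limited to those mentioned in the abstract, while the real standard defines more.

```python
# Hypothetical illustration of a MAST-style scorecard (not the paper's actual
# rubric): rate an AI system on the attributes named in the abstract and
# summarize the result for reviewers.
from statistics import mean

# Attributes named in the abstract; the actual standard defines more.
ATTRIBUTES = ["sourcing", "uncertainty", "consistency", "accuracy", "visualization"]


def score_system(ratings: dict[str, int]) -> dict:
    """Check that every attribute is rated 1-5 and summarize the scorecard."""
    missing = [a for a in ATTRIBUTES if a not in ratings]
    if missing:
        raise ValueError(f"unrated attributes: {missing}")
    if any(not 1 <= ratings[a] <= 5 for a in ATTRIBUTES):
        raise ValueError("ratings must be on a 1-5 scale")
    return {"ratings": ratings, "overall": mean(ratings[a] for a in ATTRIBUTES)}


if __name__ == "__main__":
    example = {"sourcing": 4, "uncertainty": 3, "consistency": 5,
               "accuracy": 4, "visualization": 2}
    report = score_system(example)
    print(f"overall: {report['overall']:.1f}")
    for name, value in report["ratings"].items():
        print(f"  {name:13s} {value}/5")
```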