Explanation & Argumentation


Artificial Intelligence Breakthrough: Training and Image Recognition on Low Power CPU (with no GPU), via Explainable-AI for Smart Appliance Pilot for Bosch

#artificialintelligence

Z Advanced Computing, Inc. (ZAC), the pioneer startup in Explainable AI (XAI), is developing its Smart Home product line through a paid pilot for Smart Appliances for BSH Home Appliances (a subsidiary of the Bosch Group, originally a joint venture between Bosch and Siemens), the largest manufacturer of home appliances in Europe and one of the largest in the world. ZAC has just successfully completed Phase 1 of the pilot program. "Our cognitive-based algorithm is more robust, resilient, consistent, and reproducible, with higher accuracy, than Convolutional Neural Nets or GANs, which others are using now. It also requires a much smaller number of training samples compared to CNNs, which is a huge advantage," said Dr. Saied Tadayon, CTO of ZAC. "We did the entire work on a regular laptop, for both training and recognition, without any dedicated GPU. So, our computing requirement is much smaller than that of a typical Neural Net, which requires a dedicated GPU," continued Dr. Bijan Tadayon, CEO of ZAC.


Will XAI become the key factor to future Artificial Intelligence adoption?

#artificialintelligence

Explainable Artificial Intelligence (XAI) seems to be a hot topic nowadays. It is a topic I have come across recently in a number of instances: workshops organized by the European Defence Agency (EDA), posts from technology partners such as Expert System, and internal discussions with SDL's Research team. The straightforward definition of XAI comes from Wikipedia: "Explainable AI (XAI) refers to methods and techniques in the application of artificial intelligence technology (AI) such that the results of the solution can be understood by human experts. It contrasts with the concept of the 'black box' in machine learning where even their designers cannot explain why the AI arrived at a specific decision. XAI is an implementation of the social right to explanation."


What's Next after Artificial Intelligence? Enter the World of Explainable Artificial Intelligence HostReview.com

#artificialintelligence

Artificial intelligence is one of the hottest trends across the planet. But it's not new; it has been around for quite some time and has helped people and organizations do wonders. Since organizations and enterprises these days have a lot of data, they can harness the benefits of Artificial Intelligence Business Solutions. As a result, they gain a competitive advantage and enjoy dominance in the market. Looking back, artificial intelligence was conceived in the 1940s, and there was a lot of skepticism around it.


Answering the Question Why: Explainable AI

#artificialintelligence

The statistical branch of Artificial Intelligence has enamored organizations across industries, spurred an immense amount of capital dedicated to its technologies, and entranced numerous media outlets for the past couple of years. All of this attention, however, will ultimately prove unwarranted unless organizations, data scientists, and various vendors can answer one simple question: can they provide Explainable AI? Although explaining the results of Machine Learning models--and producing consistent results from them--has never been easy, a number of emergent techniques have recently appeared to open the proverbial 'black box' that renders these models so difficult to explain. One of the most useful involves modeling real-world events with the adaptive schema of knowledge graphs and, via Machine Learning, gleaning whether they're related and how frequently they occur together. When the knowledge graph environment is endowed with an additional temporal dimension that organizations can traverse forwards and backwards with dynamic visualizations, they can understand what actually triggered these events, how one affected others, and the critical aspect of causation necessary for Explainable AI.
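As a rough illustration of the idea (not the article's implementation), the sketch below uses the networkx library to store hypothetical events as graph nodes with timestamps, and co-occurrence counts on edges, so the graph can be traversed backwards to candidate triggers and forwards to possibly affected events; all event names and numbers are invented.

# Minimal sketch, assuming events are nodes in a directed graph annotated with
# timestamps; names and counts are hypothetical, not from the article.
import networkx as nx

G = nx.DiGraph()
for name, t in [("sensor_spike", 1), ("pump_failure", 3), ("line_shutdown", 4)]:
    G.add_node(name, time=t)

# Edges record how often one event preceded another (counts a learned model
# might estimate from historical data).
G.add_edge("sensor_spike", "pump_failure", count=42)
G.add_edge("pump_failure", "line_shutdown", count=17)

def upstream(event):
    """Walk backwards in time to list candidate triggers of an event."""
    return sorted(((a, G.nodes[a]["time"]) for a in nx.ancestors(G, event)),
                  key=lambda pair: pair[1])

def downstream(event):
    """Walk forwards in time to list events this event may have affected."""
    return sorted(((d, G.nodes[d]["time"]) for d in nx.descendants(G, event)),
                  key=lambda pair: pair[1])

print(upstream("line_shutdown"))   # [('sensor_spike', 1), ('pump_failure', 3)]
print(downstream("sensor_spike"))  # [('pump_failure', 3), ('line_shutdown', 4)]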


Boosting Machine Learning Models with Explainable AI (XAI)

#artificialintelligence

With a typical machine learning model, traditional correlation-based feature importance analysis often has limited value. In a data scientist's toolkit, are there reliable, systematic, model-agnostic methods that measure feature impact accurately for each prediction? As AI gains traction with more applications, Explainable AI (XAI) is an increasingly critical component for explaining models with clarity and deploying them with confidence. XAI technologies are becoming more mature for both machine learning and deep learning. SHAP (SHapley Additive exPlanations) was developed by Scott Lundberg at the University of Washington.
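A minimal usage sketch of the SHAP library follows; the gradient-boosted tree model and the public dataset are placeholders chosen for illustration, not anything from the article.

# Minimal SHAP sketch: per-prediction feature attributions for a tree model.
# The dataset and model here are placeholders, not from the article.
import shap
import xgboost
from sklearn.datasets import fetch_california_housing

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=100).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Each row of shap_values, added to the expected value, recovers that row's
# prediction, giving an additive explanation per individual prediction.
shap.summary_plot(shap_values, X)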


A Study on Multimodal and Interactive Explanations for Visual Question Answering

arXiv.org Artificial Intelligence

Explainability and interpretability of AI models are essential factors affecting the safety of AI. While various explainable AI (XAI) approaches aim at mitigating the lack of transparency in deep networks, evidence of the effectiveness of these approaches in improving the usability, trust, and understanding of AI systems is still missing. We evaluate multimodal explanations in the setting of a Visual Question Answering (VQA) task by asking users to predict the response accuracy of a VQA agent with and without explanations. We use between-subjects and within-subjects experiments to probe explanation effectiveness in terms of improving user prediction accuracy, confidence, and reliance, among other factors. The results indicate that the explanations help improve human prediction accuracy, especially in trials where the VQA system's answer is inaccurate. Furthermore, we introduce active attention, a novel method for evaluating causal attentional effects through intervention by editing attention maps. User explanation ratings are strongly correlated with human prediction accuracy and suggest the efficacy of these explanations in human-machine AI collaboration tasks.


Algorithmic Recourse: from Counterfactual Explanations to Interventions

arXiv.org Artificial Intelligence

As machine learning is increasingly used to inform consequential decision-making (e.g., pre-trial bail and loan approval), it becomes important to explain how the system arrived at its decision, and also to suggest actions to achieve a favorable decision. Counterfactual explanations -- "how the world would have (had) to be different for a desirable outcome to occur" -- aim to satisfy these criteria. Existing works have primarily focused on designing algorithms to obtain counterfactual explanations for a wide range of settings. However, one of the main objectives of "explanations as a means to help a data-subject act rather than merely understand" has been overlooked. In layman's terms, counterfactual explanations inform an individual where they need to get to, but not how to get there. In this work, we rely on causal reasoning to caution against the use of counterfactual explanations as a recommendable set of actions for recourse. Instead, we propose a paradigm shift from recourse via nearest counterfactual explanations to recourse through minimal interventions, moving the focus from explanations to recommendations. Finally, we provide the reader with an extensive discussion on how to realistically achieve recourse beyond structural interventions.
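The toy sketch below illustrates the distinction the abstract draws, under invented structural equations and a made-up approval threshold: a nearest counterfactual perturbs a feature in isolation, while a minimal intervention acts on an upstream variable and lets the change propagate through the structural equation.

# Toy sketch contrasting a nearest counterfactual with a minimal intervention.
# The structural equations, features, and threshold are invented for
# illustration and are not taken from the paper.

def score(education, income):
    return education + income

def income_from_education(education, noise):
    # Structural equation: income is (partly) caused by education.
    return 2.0 * education + noise

THRESHOLD = 12.0

edu, noise = 3.0, 1.0
inc = income_from_education(edu, noise)          # 7.0
print(score(edu, inc) >= THRESHOLD)              # False -> loan denied

# Nearest counterfactual: treat features as independent and ask for the
# smallest change that flips the decision, e.g. "income = 9". It says where
# the individual needs to get to, not how to get there.
print(score(edu, 9.0) >= THRESHOLD)              # True, but no action given

# Minimal intervention: act on education; the change propagates through the
# structural equation, so income rises as a consequence of the action.
new_edu = 4.0
new_inc = income_from_education(new_edu, noise)  # 9.0
print(score(new_edu, new_inc) >= THRESHOLD)      # True -> recourse achieved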


PostDoc Researcher - Graph Representation Learning and Explainable AI ai-jobs.net

#artificialintelligence

Accenture Labs Dublin is looking for a Post-Doctoral researcher in the domain of Graph Representation Learning and Explainable AI. You will be in charge of designing interpretable machine learning models to infer knowledge from a graph of clinical, genomic, and behavioural data. Explanations will use a wide range of techniques, such as rules derived from the deep learning models, gradient-based attribution methods, or graph-based explanations based on network analysis. The length of the PostDoc is 3 years. You will join a multi-partner project whose goal is to identify factors that can cause the development of new medical conditions and worsen the quality of life of cancer survivors.


Explainable Artificial Intelligence beyond.ai

#artificialintelligence

Explainable AI cannot be implemented as an afterthought or add-on to an existing system. It must be part of the original design. Beyond Limits systems cover the full spectrum of explainability, providing high-level system alerts, plus drill-down reasoning traces with detailed evidence, probability, and risk. Explainable AI helps take the mystery out of the technology and is the first step in enabling artificial intelligence to work with people in a trusting and mutually beneficial relationship.


Cognitive Argumentation and the Suppression Task

arXiv.org Artificial Intelligence

This paper addresses the challenge of modeling human reasoning within a new framework called Cognitive Argumentation. This framework rests on the assumption that human logical reasoning is inherently a process of dialectic argumentation and aims to develop a cognitive model for human reasoning that is computational and implementable. To give logical reasoning a human cognitive form, the framework relies on cognitive principles, based on empirical and theoretical work in Cognitive Science, to suitably adapt a general and abstract framework of computational argumentation from AI. The approach of Cognitive Argumentation is evaluated with respect to Byrne's suppression task, where the aim is not only to capture the suppression effect between different groups of people but also to account for the variation of reasoning within each group. Two main cognitive principles are particularly important for capturing human conditional reasoning and explaining the participants' responses: (i) the interpretation of a condition within a conditional as sufficient and/or necessary, and (ii) the mode of reasoning, either predictive or explanatory. We argue that Cognitive Argumentation provides a coherent and cognitively adequate model for human conditional reasoning that allows a natural distinction between definite and plausible conclusions, exhibiting the important characteristics of context-sensitive and defeasible reasoning.
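As a toy illustration of principle (i) only (not the paper's Cognitive Argumentation framework), the sketch below encodes Byrne's classic essay/library scenario: reading the condition as sufficient licenses modus ponens, while treating an additional condition ("the library is open") as necessary, with its status unknown, suppresses the conclusion.

# Toy sketch of the suppression effect, illustrating only principle (i); this
# is not the paper's framework, and the encoding is a simplifying assumption.

def conclude_study_late(has_essay, library_open=None, open_is_necessary=False):
    """Return True / False / None (None = no definite conclusion)."""
    if not has_essay:
        return None
    if open_is_necessary:
        if library_open is None:
            return None          # necessary condition unknown -> inference suppressed
        return library_open
    return True                  # condition read as sufficient -> classic modus ponens

# Group 1: only "if she has an essay, she studies late" -- modus ponens applies.
print(conclude_study_late(has_essay=True))                          # True

# Group 2: the extra conditional "if the library is open ..." makes 'open' a
# plausible necessary condition; with its truth unknown, the conclusion is
# withheld, mirroring the observed suppression.
print(conclude_study_late(has_essay=True, open_is_necessary=True))  # None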