Collaborating Authors

Explanation & Argumentation

Reports of the Workshops Held at the 2021 AAAI Conference on Artificial Intelligence

Interactive AI Magazine

The Workshop Program of the Association for the Advancement of Artificial Intelligence's Thirty-Fifth Conference on Artificial Intelligence was held virtually from February 8-9, 2021. There were twenty-six workshops in the program: Affective Content Analysis, AI for Behavior Change, AI for Urban Mobility, Artificial Intelligence Safety, Combating Online Hostile Posts in Regional Languages during Emergency Situations, Commonsense Knowledge Graphs, Content Authoring and Design, Deep Learning on Graphs: Methods and Applications, Designing AI for Telehealth, 9th Dialog System Technology Challenge, Explainable Agency in Artificial Intelligence, Graphs and More Complex Structures for Learning and Reasoning, 5th International Workshop on Health Intelligence, Hybrid Artificial Intelligence, Imagining Post-COVID Education with AI, Knowledge Discovery from Unstructured Data in Financial Services, Learning Network Architecture During Training, Meta-Learning and Co-Hosted Competition, ...

'Explainable AI' Builds Trust With Customers - Insurance Thought Leadership


Insurance is moving toward a world in which carriers will not be allowed to make decisions that affect customers based on black-box AI. Artificial intelligence (AI) holds a lot of promise for the insurance industry, particularly for reducing premium leakage, accelerating claims and making underwriting more accurate. AI can identify patterns and indicators of risk that would otherwise go unnoticed by human eyes. Unfortunately, AI has often been a black box: Data goes in, results come out and no one -- not even the creators of the AI -- has any idea how the AI came to its conclusions. That's because pure machine learning (ML) analyzes the data in an iterative fashion to develop a model, and that process is simply not available or understandable.

How Does Understanding Of AI Shape Perceptions Of XAI?


One of the biggest challenges of machine learning and artificial intelligence is their inability to explain their decisions to users. This black-box quality renders such systems largely impenetrable, making it difficult for scientists and researchers to understand why a system behaves the way it does. In recent years, a new branch of research, explainable AI (XAI), has emerged, which researchers are actively pursuing to establish user-friendly AI. That said, how AI explanations are perceived depends heavily on a person's background in AI. A new study, "The Who in Explainable AI: How AI Background Shapes Perceptions of AI Explanations," argues that AI background influences each group's interpretations and that these differences can be understood through the lens of appropriation and cognitive heuristics.

Explainable AI May Surrender Confidential Data More Easily


Researchers from the National University of Singapore have concluded that the more explainable AI becomes, the easier it will become to circumvent vital privacy features in machine learning systems. They also found that even when a model is not explainable, it's possible to use explanations of similar models to 'decode' sensitive data in the non-explainable model. The research, titled Exploiting Explanations for Model Inversion Attacks, highlights the risks of using the 'accidental' opacity of the way neural networks function as if this was a by-design security feature – not least because a wave of new global initiatives, including the European Union's draft AI regulations, are characterizing explainable AI (XAI) as a prerequisite for the eventual normalization of machine learning in society. In the research, an actual identity is successfully reconstructed from supposedly anonymous data relating to facial expressions, through the exploitation of multiple explanations of the machine learning system. 'Explainable artificial intelligence (XAI) provides more information to help users to understand model decisions, yet this additional knowledge exposes additional risks for privacy attacks.'

Even experts are too quick to rely on AI explanations, study finds


As AI systems increasingly inform decision-making in health care, finance, law, and criminal justice, they need to provide justifications for their behavior that humans can understand. The field of "explainable AI" has gained momentum as regulators turn a critical eye toward black-box AI systems -- and their creators. But how a person's background can shape perceptions of AI explanations is a question that remains underexplored. A new study coauthored by researchers at Cornell University, IBM, and the Georgia Institute of Technology aims to shed light on the intersection of interpretability and explainable AI.

Can Explainable AI be Automated?


I recently fell in love with Explainable AI (XAI). XAI is a set of methods aimed at making increasingly complex machine learning (ML) models understandable by humans. XAI could help bridge the gap between AI and humans. That is very much needed as the gap is widening. Machine learning is proving incredibly successful in tackling problems from cancer diagnostics to fraud detection.

Using Counterfactual Instances for XAI


The biggest shortcoming of many machine learning models and neural networks is their "black box" nature: which feature was most influential in the output predicted for a given instance? XAI, which stands for Explainable Artificial Intelligence, is the area of study that tries to tackle this black-box issue.

On Quantifying Literals in Boolean Logic and Its Applications to Explainable AI

Quantified Boolean logic extends Boolean logic with operators for existentially and universally quantifying variables, enabling a variety of applications that have been explored over the decades. The existential quantification of literals (variable states) and its applications have also been studied in the literature. In this paper, we complement this line of work by studying universal literal quantification and its applications, particularly to explainable AI. We also provide a novel semantics for quantification and discuss the interplay between variable/literal and existential/universal quantification. We further identify some classes of Boolean formulas and circuits on which quantification can be done efficiently. Literal quantification is more fine-grained than variable quantification, as the latter can be defined in terms of the former. This leads to a refinement of quantified Boolean logic with literal quantification as its primitive.
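As background for the variable quantification the abstract builds on (this sketch does not reproduce the paper's new literal semantics), variable quantification has a standard definition via Shannon cofactors: exists x. f = f[x=1] OR f[x=0], and forall x. f = f[x=1] AND f[x=0]. A minimal Python rendering, with Boolean functions as callables over an assignment dict:

```python
# Standard Boolean-algebra sketch of VARIABLE quantification via
# Shannon cofactors (not the paper's literal-quantification semantics):
#   exists x. f = f[x=1] OR  f[x=0]
#   forall x. f = f[x=1] AND f[x=0]

def cofactor(f, var, value):
    """Restrict f (a function of an assignment dict) by fixing var to value."""
    return lambda env: f({**env, var: value})

def exists(f, var):
    return lambda env: cofactor(f, var, True)(env) or cofactor(f, var, False)(env)

def forall(f, var):
    return lambda env: cofactor(f, var, True)(env) and cofactor(f, var, False)(env)

# f(x, y) = x XOR y
f = lambda env: env["x"] != env["y"]

# exists x. (x xor y) is a tautology; forall x. (x xor y) is unsatisfiable.
for y in (False, True):
    assert exists(f, "x")({"y": y}) is True
    assert forall(f, "x")({"y": y}) is False
```

The abstract's point that variable quantification can be defined in terms of literal quantification means the two operators above would become derived forms in the refined logic, with quantification over literals (variable states) as the primitive.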

Learn-Explain-Reinforce: Counterfactual Reasoning and Its Guidance to Reinforce an Alzheimer's Disease Diagnosis Model

Existing studies on disease diagnostic models focus either on diagnostic model learning for performance improvement or on the visual explanation of a trained diagnostic model. We propose a novel learn-explain-reinforce (LEAR) framework that unifies diagnostic model learning, visual explanation generation (explanation unit), and trained diagnostic model reinforcement (reinforcement unit) guided by the visual explanation. For the visual explanation, we generate a counterfactual map that transforms an input sample so that it is identified as an intended target label. For example, a counterfactual map can localize hypothetical abnormalities within a normal brain image that may cause it to be diagnosed with Alzheimer's disease (AD). We believe that the generated counterfactual maps represent data-driven and model-induced knowledge about a target task, i.e., AD diagnosis using structural MRI, which can be a vital source of information to reinforce the generalization of the trained diagnostic model. To this end, we devise an attention-based feature refinement module with the guidance of the counterfactual maps. The explanation and reinforcement units are reciprocal and can be operated iteratively. Our proposed approach was validated via qualitative and quantitative analysis on the ADNI dataset. Its comprehensibility and fidelity were demonstrated through ablation studies and comparisons with existing methods.
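The core object in the abstract is a counterfactual map: an additive perturbation delta such that the model classifies x + delta as the target label. The paper learns this map with a neural generator; the schematic below instead uses a toy linear scorer, where the minimal such map has a closed form, purely to illustrate the idea.

```python
import numpy as np

# Schematic of the counterfactual-map idea only. The LEAR paper learns
# the map with a neural network; here a toy LINEAR scorer s(x) = w.x + b
# stands in, for which the minimal additive map reaching a target score t
# is delta = (t - s(x)) * w / ||w||^2.

def score(w, b, x):
    return float(np.dot(w, x) + b)

def counterfactual_map(w, b, x, target_score):
    """Minimal additive perturbation moving s(x) exactly to target_score."""
    return (target_score - score(w, b, x)) * w / np.dot(w, w)

# Hypothetical 3-feature input (e.g. three region-level measurements).
w = np.array([0.5, -1.0, 2.0])
b = 0.2
x = np.array([1.0, 1.0, 0.0])       # s(x) = 0.5 - 1.0 + 0.2 = -0.3
delta = counterfactual_map(w, b, x, target_score=1.0)
print(round(score(w, b, x + delta), 6))   # 1.0
```

In the imaging setting, `delta` plays the role of the counterfactual map: its nonzero entries localize where the input would have to change to receive the target diagnosis, which is the "model-induced knowledge" the reinforcement unit then exploits.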

Explainable Reinforcement Learning for Broad-XAI: A Conceptual Framework and Survey

Broad Explainable Artificial Intelligence moves away from interpreting an individual decision based on a single datum and aims to integrate explanations from multiple machine learning algorithms into a coherent explanation of an agent's behaviour, aligned to the communication needs of the explainee. Reinforcement Learning (RL) methods, we propose, provide a potential backbone for the cognitive model required for the development of Broad-XAI. RL represents a suite of approaches that have had increasing success in solving a range of sequential decision-making problems. However, these algorithms all operate as black-box problem solvers, obfuscating their decision-making policy through a complex array of values and functions. EXplainable RL (XRL) is a relatively recent field of research that aims to develop techniques for extracting concepts from the agent's perception of the environment; its intrinsic/extrinsic motivations and beliefs; and its Q-values, goals, and objectives. This paper introduces a conceptual framework, called the Causal XRL Framework (CXF), that unifies current XRL research and uses RL as a backbone for the development of Broad-XAI. Additionally, we recognise that RL methods have the ability to incorporate a range of technologies that allow agents to adapt to their environment. CXF is designed to incorporate many standard RL extensions and to integrate with external ontologies and communication facilities so that the agent can answer questions that explain outcomes and justify its decisions.