Explainable AI: current status and future directions

arXiv.org Artificial Intelligence

Explainable Artificial Intelligence (XAI) is an emerging area of research in the field of Artificial Intelligence (AI). XAI can explain how an AI system obtained a particular solution (e.g., a classification or object detection) and can also answer other "wh" questions. Such explainability is not possible with traditional AI. Explainability is essential for critical applications such as defense, health care, law enforcement, and autonomous vehicles, where knowing how a decision was reached is required for trust and transparency. A number of XAI techniques have been proposed for such applications so far. This paper provides an overview of these techniques from a multimedia (i.e., text, image, audio, and video) point of view. The advantages and shortcomings of these techniques are discussed, and pointers to some future directions are provided.


Explainable AI (XAI) for PHM of Industrial Asset: A State-of-The-Art, PRISMA-Compliant Systematic Review

arXiv.org Artificial Intelligence

A state-of-the-art systematic review of XAI applied to Prognostics and Health Management (PHM) of industrial assets is presented. The work provides an overview of the general trend of XAI in PHM, addresses the question of accuracy versus explainability, and investigates the extent of the human role, explainability evaluation, and uncertainty management in PHM XAI. Research articles on PHM XAI, written in English and published from 2015 to 2021, were selected from the IEEE Xplore, ScienceDirect, SpringerLink, ACM Digital Library, and Scopus databases following PRISMA guidelines. Data were extracted from the 35 selected articles and examined using MS Excel. Several findings were synthesized. Firstly, while the discipline is still young, the analysis indicates growing acceptance of XAI in the PHM domain. Secondly, XAI functions as a double-edged sword: it is assimilated both as a tool for executing PHM tasks and as a means of explanation, in particular for diagnostics and anomaly detection; there is thus a need for XAI in PHM. Thirdly, the review shows that PHM XAI papers generally produce good or excellent results, suggesting that PHM performance is unaffected by XAI. Fourthly, the human role, explainability metrics, and uncertainty management are areas requiring further attention from the PHM community; adequate explainability metrics that cater to PHM needs are urgently required. Finally, most case studies featured in the accepted articles are based on real data, indicating that available AI and XAI approaches are equipped to solve complex real-world challenges, increasing confidence in the adoption of AI models in industry. This work is funded by the Universiti Teknologi Petronas Foundation.


An Explainable AI System for the Diagnosis of High Dimensional Biomedical Data

arXiv.org Artificial Intelligence

ABSTRACT Typical state-of-the-art flow cytometry data samples consist of measurements of more than 100,000 cells in 10 or more features. AI systems are able to diagnose such data with almost the same accuracy as human experts. However, there is one central challenge in such systems: their decisions have far-reaching consequences for the health and life of people, and therefore the decisions of AI systems need to be understandable and justifiable by humans. In this work, we present a novel explainable AI method, called ALPODS, which is able to classify (diagnose) cases based on clusters, i.e., subpopulations, in the high-dimensional data. ALPODS is able to explain its decisions in a form that is understandable to human experts. For the identified subpopulations, fuzzy reasoning rules expressed in the typical language of domain experts are generated. A visualization method based on these rules allows human experts to understand the reasoning used by the AI system. A comparison with a selection of state-of-the-art explainable AI systems shows that ALPODS operates efficiently on known benchmark data and also on everyday routine case data. KEYWORDS: Explainable AI, Expert System, Symbolic System, Biomedical Data 1. INTRODUCTION State-of-the-art machine learning (ML) artificial intelligence (AI) algorithms are able to diagnose (classify) high-dimensional data sets in modern medicine effectively and efficiently, e.g., for multiparameter flow cytometry data [Hu et al., 2019; Zhao et al., 2020]. These are systems that, after a training (learning) phase using learning data, perform well on data that are not part of the training data, i.e., the test data.
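
The core idea, clusters turned into expert-readable rules, can be illustrated with a toy sketch. This is not the ALPODS algorithm itself: the clustering method (k-means), the synthetic data, and the marker names are stand-ins chosen only for illustration.

```python
# Toy sketch (NOT the ALPODS algorithm): turn clusters/subpopulations into
# simple "low/high" rules phrased with hypothetical marker names.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic cytometry-like data: two subpopulations in three features.
X = np.vstack([rng.normal([2.0, 8.0, 1.0], 1.0, (500, 3)),
               rng.normal([8.0, 2.0, 5.0], 1.0, (500, 3))])
features = ["CD4", "CD8", "CD19"]          # hypothetical marker names

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Describe each cluster by comparing its mean to the global median, one crude
# way to express a rule in the language of domain experts.
global_median = np.median(X, axis=0)
for c in np.unique(labels):
    centre = X[labels == c].mean(axis=0)
    terms = [f"{f} is {'high' if m > g else 'low'}"
             for f, m, g in zip(features, centre, global_median)]
    print(f"IF {' AND '.join(terms)} THEN subpopulation {c}")
```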


Explainable AI for Natural Adversarial Images

arXiv.org Artificial Intelligence

Adversarial images highlight how vulnerable modern image classifiers are to perturbations outside of their training set. Human oversight might mitigate this weakness, but depends on humans understanding the AI well enough to predict when it is likely to make a mistake. In previous work we have found that humans tend to assume that the AI's decision process mirrors their own. Here we evaluate if methods from explainable AI can disrupt this assumption to help participants predict AI classifications for adversarial and standard images. We find that both saliency maps and examples facilitate catching AI errors, but their effects are not additive, and saliency maps are more effective than examples.
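
For readers unfamiliar with saliency maps, the following minimal sketch shows one common way to produce them (vanilla gradient saliency); the specific saliency method used in the study may differ, and the untrained model and random input are placeholders.

```python
# Minimal sketch: vanilla gradient saliency for an image classifier. The model
# is left untrained and the input is random; both are placeholders.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()   # load pretrained weights in practice
image = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(image)
top_class = logits.argmax(dim=1).item()
logits[0, top_class].backward()                # gradient of the top score w.r.t. pixels

# One saliency value per pixel: max absolute gradient across colour channels.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)                          # torch.Size([224, 224])
```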


Practical Machine Learning Safety: A Survey and Primer

arXiv.org Artificial Intelligence

Among different ML models, Deep Neural Networks (DNNs) [130] are well known and widely used for their powerful representation learning from high-dimensional data such as images, text, and speech. However, as ML algorithms enter sensitive real-world domains with trustworthiness, safety, and fairness prerequisites, the need for corresponding techniques and metrics for high-stakes domains is more noticeable than before. Hence, researchers in different fields have proposed guidelines for Trustworthy AI [208], Safe AI [5], and Explainable AI [155] as stepping stones for the next generation of Responsible AI [6, 247]. Furthermore, government reports and regulations on AI accountability [75], trustworthiness [216], and safety [31] are gradually turning into laws that mandate protecting citizens' data privacy, ensuring fair data processing, and upholding safety for AI-based products. The development and deployment of ML algorithms for open-world tasks come with reliability and dependability limitations rooted in model performance, robustness, and uncertainty. Unlike traditional code-based software, ML models have fundamental safety drawbacks, including performance limitations tied to their training set and limited run-time robustness in their operational domain [156].


White Paper Machine Learning in Certified Systems

arXiv.org Artificial Intelligence

Machine Learning (ML) seems to be one of the most promising solutions for partially or completely automating some of the complex tasks currently performed by humans, such as driving vehicles or recognizing voice. It is also an opportunity to implement and embed new capabilities that are out of the reach of classical implementation techniques. However, ML techniques introduce new potential risks. Therefore, they have only been applied in systems where their benefits are considered worth the increase in risk. In practice, ML techniques raise multiple challenges that could prevent their use in systems subject to certification constraints. But what are the actual challenges? Can they be overcome by selecting appropriate ML techniques, or by adopting new engineering or certification practices? These are some of the questions addressed by the ML Certification Workgroup (WG) set up by the Institut de Recherche Technologique Saint Exupéry de Toulouse (IRT), as part of the DEEL Project.


Mitigating belief projection in explainable artificial intelligence via Bayesian Teaching

arXiv.org Artificial Intelligence

State-of-the-art deep-learning systems use decision rules that are challenging for humans to model. Explainable AI (XAI) attempts to improve human understanding but rarely accounts for how people typically reason about unfamiliar agents. We propose explicitly modeling the human explainee via Bayesian Teaching, which evaluates explanations by how much they shift explainees' inferences toward a desired goal. We assess Bayesian Teaching in a binary image classification task across a variety of contexts. Absent intervention, participants predict that the AI's classifications will match their own, but explanations generated by Bayesian Teaching improve their ability to predict the AI's judgements by moving them away from this prior belief. Bayesian Teaching further allows each case to be broken down into sub-examples (here saliency maps). These sub-examples complement whole examples by improving error detection for familiar categories, whereas whole examples help predict correct AI judgements of unfamiliar cases.
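
A toy sketch of the scoring idea behind Bayesian Teaching: an explanation (here, a candidate example) is scored by the posterior probability a modelled learner would assign to the target hypothesis after seeing it. The learner model, hypotheses, and numbers below are invented purely for illustration.

```python
# Toy sketch of Bayesian Teaching scoring: pick the example that moves a
# modelled learner's posterior most toward the target hypothesis.
import numpy as np

# Likelihood of each candidate example under two hypotheses about the AI:
# h0 = "the AI decides like me", h1 = "the AI relies on different features".
likelihood = np.array([
    [0.60, 0.10],   # example A
    [0.30, 0.50],   # example B
    [0.10, 0.40],   # example C
])
prior = np.array([0.8, 0.2])   # explainees initially project their own beliefs

def posterior_after(i):
    unnorm = prior * likelihood[i]
    return unnorm / unnorm.sum()

# Teaching score = posterior mass placed on the target hypothesis h1.
scores = [posterior_after(i)[1] for i in range(len(likelihood))]
best = int(np.argmax(scores))
print(f"best example: {'ABC'[best]}, P(h1 | example) = {scores[best]:.2f}")
```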


Introducing and assessing the explainable AI (XAI) method: SIDU

arXiv.org Artificial Intelligence

Explainable Artificial Intelligence (XAI) has in recent years become a well-suited framework for generating human-understandable explanations of black-box models. In this paper, we present a novel XAI visual explanation algorithm, denoted SIDU, that can effectively localize the entire object regions responsible for a prediction. We analyze its robustness and effectiveness through various computational and human-subject experiments. In particular, we assess the SIDU algorithm using three different types of evaluations (application-, human-, and functionally-grounded) to demonstrate its superior performance. The robustness of SIDU is further studied in the presence of adversarial attacks on black-box models to better understand its performance.
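
Below is a greatly simplified sketch of a mask-based visual explanation in the spirit of SIDU; it is not the authors' implementation. The choice of backbone, the thresholding of feature maps, the number of masks kept, and the exact form of the similarity-difference and uniqueness scores are assumptions made only to illustrate the idea.

```python
# Simplified sketch in the spirit of SIDU (not the authors' code): derive masks
# from the last convolutional feature maps, weight them by similarity-difference
# and uniqueness scores, and sum the weighted masks into a heatmap.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=None).eval()                   # pretrained weights in practice
backbone = torch.nn.Sequential(*list(model.children())[:-2])   # last conv feature maps

x = torch.rand(1, 3, 224, 224)                                 # placeholder image
with torch.no_grad():
    fmaps = backbone(x)[0][:32]                                # small subset for speed
    masks = (fmaps > fmaps.mean(dim=(1, 2), keepdim=True)).float()
    masks = F.interpolate(masks[None], size=(224, 224), mode="bilinear")[0]

    p_orig = F.softmax(model(x), dim=1)                        # (1, num_classes)
    p_masked = torch.stack([F.softmax(model(x * m), dim=1)[0] for m in masks])

    # Similarity difference: reward masks whose prediction stays close to the
    # original; uniqueness: reward masks whose prediction differs from the others'.
    sd = torch.exp(-torch.norm(p_masked - p_orig, dim=1))
    uniqueness = torch.cdist(p_masked, p_masked).mean(dim=1)
    weights = sd * uniqueness

heatmap = (weights[:, None, None] * masks).sum(dim=0)          # (224, 224) explanation map
print(heatmap.shape)
```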


Explainable AI for Interpretable Credit Scoring

arXiv.org Artificial Intelligence

With the ever-growing achievements in Artificial Intelligence (AI) and the recent boosted enthusiasm in Financial Technology (FinTech), applications such as credit scoring have gained substantial academic interest. Credit scoring helps financial experts make better decisions about whether or not to accept a loan application, such that loans with a high probability of default are not accepted. Apart from the noisy and highly imbalanced data challenges faced by such credit scoring models, recent regulations such as the 'right to explanation' introduced by the General Data Protection Regulation (GDPR) and the Equal Credit Opportunity Act (ECOA) have added the need for model interpretability to ensure that algorithmic decisions are understandable and coherent. An interesting concept that has recently been introduced is eXplainable AI (XAI), which focuses on making black-box models more interpretable. In this work, we present a credit scoring model that is both accurate and interpretable. For classification, state-of-the-art performance on the Home Equity Line of Credit (HELOC) and Lending Club (LC) datasets is achieved using the Extreme Gradient Boosting (XGBoost) model. The model is then further enhanced with a 360-degree explanation framework, which provides different explanations. Evaluation through functionally-grounded, application-grounded and human-grounded analysis shows that the explanations provided are simple and consistent, and satisfy the six predetermined hypotheses testing for correctness, effectiveness, easy understanding, detail sufficiency and trustworthiness. Credit scoring models are decision models that help lenders decide whether or not to accept a loan application based on the model's expectation of the applicant being capable of repaying the financial obligations [1]. Such models are beneficial since they reduce the time needed for the loan approval process, allow loan officers to concentrate on only a percentage of the applications, lead to cost savings, reduce human subjectivity and decrease default risk [2]. There has been a lot of research on this problem, with various Machine Learning (ML) and Artificial Intelligence (AI) techniques proposed. Such techniques might be exceptional in predictive power, but they are also known as black-box methods since they provide no explanations behind their decisions, making humans unable to interpret them [3]. Therefore, it is highly unlikely that any financial expert is ready to trust the predictions of a model without any sort of justification [4]. With regard to credit scoring, lenders need to understand the model's predictions to ensure that decisions are made for the correct reasons.
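
A minimal sketch of the classification backbone described above, an XGBoost model on tabular credit-style data, followed by a SHAP explanation used here as a generic stand-in; the paper's 360-degree explanation framework is not reproduced. The features, labels, and thresholds are synthetic and purely illustrative.

```python
# Minimal sketch (not the paper's pipeline): XGBoost on synthetic credit-style
# data, explained with SHAP as a generic stand-in for the 360-degree framework.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(650, 60, n),        # hypothetical credit score
    rng.uniform(0, 1, n),          # hypothetical utilisation ratio
    rng.integers(0, 10, n),        # hypothetical number of delinquencies
])
# Synthetic default label loosely tied to the features above.
y = ((X[:, 0] < 620) | (X[:, 2] > 6)).astype(int)

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X, y)

# Local explanation for one applicant: per-feature contributions to the score.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])[0]
print(dict(zip(["credit_score", "utilisation", "delinquencies"],
               np.round(contributions, 3))))
```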


Causal Shapley Values: Exploiting Causal Knowledge to Explain Individual Predictions of Complex Models

arXiv.org Artificial Intelligence

Shapley values underlie one of the most popular model-agnostic methods within explainable artificial intelligence. These values are designed to attribute the difference between a model's prediction and an average baseline to the different features used as input to the model. Being based on solid game-theoretic principles, Shapley values uniquely satisfy several desirable properties, which is why they are increasingly used to explain the predictions of possibly complex and highly non-linear machine learning models. Shapley values are well calibrated to a user's intuition when features are independent, but may lead to undesirable, counterintuitive explanations when the independence assumption is violated. In this paper, we propose a novel framework for computing Shapley values that generalizes recent work that aims to circumvent the independence assumption. By employing Pearl's do-calculus, we show how these 'causal' Shapley values can be derived for general causal graphs without sacrificing any of their desirable properties. Moreover, causal Shapley values enable us to separate the contribution of direct and indirect effects. We provide a practical implementation for computing causal Shapley values based on causal chain graphs when only partial information is available and illustrate their utility on a real-world example.
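
For reference, the abstract's description can be made concrete with the standard Shapley attribution and the interventional value function used for causal Shapley values; the notation below ($N$ the feature set, $f$ the model, $\mathbf{x}^*$ the instance being explained) is assumed for this sketch.

```latex
% Shapley attribution for feature i, with the interventional (do-calculus)
% value function that defines causal Shapley values.
\phi_i = \sum_{S \subseteq N \setminus \{i\}}
         \frac{|S|!\,(|N|-|S|-1)!}{|N|!}
         \bigl( v(S \cup \{i\}) - v(S) \bigr),
\qquad
v(S) = \mathbb{E}\!\left[ f(\mathbf{X}) \,\middle|\, \mathrm{do}\!\left(\mathbf{X}_S = \mathbf{x}_S^{*}\right) \right].
```

With this convention, $v(\varnothing)$ is the average baseline prediction, so the attributions sum to $f(\mathbf{x}^*) - \mathbb{E}[f(\mathbf{X})]$; causal Shapley values differ from the usual conditional variant only in replacing observational conditioning by the do-operator, which is what permits separating direct from indirect effects.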