Explainable AI (XAI) with Python

#artificialintelligence

This course covers: the importance of XAI in the modern world; the differentiation of glass-box, white-box, and black-box ML models; the categorization of XAI methods by scope, agnosticity, data type, and explanation technique; the trade-off between accuracy and interpretability; the application of Microsoft's InterpretML package to generate explanations of ML models; the need for counterfactual and contrastive explanations; the working principles and mathematical modeling of XAI techniques such as LIME, SHAP, DiCE, LRP, and counterfactual and contrastive explanations; and the application of these techniques to generate explanations for black-box models on tabular, textual, and image datasets. The course provides detailed insights into the latest developments in Explainable Artificial Intelligence (XAI). Our reliance on artificial intelligence models is increasing day by day, and it is becoming equally important to explain how and why AI makes a particular decision. Recent laws have also added urgency to explaining and defending the decisions made by AI systems.
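As a hedged illustration of the kind of workflow the course describes (this sketch is not taken from the course; the dataset and model are placeholders), SHAP can be used to explain a black-box model trained on tabular data:

```python
# Minimal sketch: explaining a black-box tabular model with SHAP.
# Assumes the `shap` and `scikit-learn` packages are installed.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Train an opaque ("black-box") model on a tabular dataset (placeholder data).
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# SHAP assigns each feature a contribution to an individual prediction,
# grounded in Shapley values from cooperative game theory.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global view: features ranked by their overall impact on the model output.
shap.summary_plot(shap_values, X_test)
```

The other techniques mentioned above (e.g., LIME, DiCE, LRP) follow a similar explain-one-model pattern, each through its own Python package and API.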


Data Science Books You Should Start Reading in 2021

#artificialintelligence

Aside from the fact that data science is one of the highest-paid, hottest, and most popular fields today, it is worth noting that it will remain innovative and challenging for at least another decade. Data science is unquestionably one of the most in-demand professions right now. Data science job openings abound in the global market, with enticing compensation packages from reputable employers. Companies are hiring data scientists across the board, and many now have dedicated data science departments. For ambitious data scientists all across the world, prestigious educational institutes offer exclusive curricula, including online diploma courses.


A Practical Tutorial on Explainable AI Techniques

arXiv.org Artificial Intelligence

Recent years have been characterized by an upsurge of opaque automatic decision support systems, such as Deep Neural Networks (DNNs). Although they have great generalization and prediction skills, their functioning does not allow obtaining detailed explanations of their behaviour. As opaque machine learning models are increasingly employed to make important predictions in critical environments, the danger is to create and use decisions that are not justifiable or legitimate. Therefore, there is general agreement on the importance of endowing machine learning models with explainability. The reason is that eXplainable Artificial Intelligence (XAI) techniques can serve to verify and certify model outputs and enhance them with desirable notions such as trustworthiness, accountability, transparency and fairness. This tutorial is meant to be the go-to handbook for any audience with a computer science background aiming to get intuitive insights into machine learning models, accompanied by straightforward, fast, and intuitive explanations out of the box. We believe that these methods provide a valuable contribution for readers applying XAI techniques to their particular day-to-day models, datasets and use-cases. A flowchart figure acts as a map for readers and should help them find the ideal method for their type of data. For each proposed method, the reader will find a description, an example of use, and a Python notebook that can easily be modified and applied to their own use case.
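As an illustrative sketch in the spirit of the tutorial's notebooks (the dataset and model here are assumptions, not taken from the paper), LIME explains a single prediction of an opaque classifier by fitting a local surrogate model:

```python
# Minimal sketch: a local LIME explanation for one tabular prediction.
# Assumes the `lime` and `scikit-learn` packages are installed.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# The "black-box" model whose individual predictions we want to explain.
data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# LIME perturbs the instance, queries the model, and fits a weighted linear
# model locally; its coefficients serve as the explanation.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4)
print(explanation.as_list())  # feature conditions and their local weights
```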


Reports of the Workshops Held at the 2021 AAAI Conference on Artificial Intelligence

Interactive AI Magazine

The Workshop Program of the Association for the Advancement of Artificial Intelligence's Thirty-Fifth Conference on Artificial Intelligence was held virtually from February 8-9, 2021. There were twenty-six workshops in the program: Affective Content Analysis, AI for Behavior Change, AI for Urban Mobility, Artificial Intelligence Safety, Combating Online Hostile Posts in Regional Languages during Emergency Situations, Commonsense Knowledge Graphs, Content Authoring and Design, Deep Learning on Graphs: Methods and Applications, Designing AI for Telehealth, 9th Dialog System Technology Challenge, Explainable Agency in Artificial Intelligence, Graphs and More Complex Structures for Learning and Reasoning, 5th International Workshop on Health Intelligence, Hybrid Artificial Intelligence, Imagining Post-COVID Education with AI, Knowledge Discovery from Unstructured Data in Financial Services, Learning Network Architecture During Training, Meta-Learning and Co-Hosted Competition, ...


7 Free Resources To Learn Explainable AI

#artificialintelligence

Explainable AI (XAI) is key to establishing trust among users and countering the black-box nature of machine learning models. In general, XAI enhances accountability and reliability in machine learning models. For a long time, tech giants like Google, IBM and others have poured resources into explainable AI to explain the decision-making process of such models. Below are the top free resources to understand Explainable AI (XAI) in detail. About: Explainable Machine Learning with LIME and H2O in R is a hands-on, guided introduction to explainable machine learning.


A Study of Automatic Metrics for the Evaluation of Natural Language Explanations

arXiv.org Artificial Intelligence

As transparency becomes key for robotics and AI, it will be necessary to evaluate the methods through which transparency is provided, including automatically generated natural language (NL) explanations. Here, we explore parallels between the generation of such explanations and the much-studied field of evaluation of Natural Language Generation (NLG). Specifically, we investigate which of the NLG evaluation measures map well to explanations. We present the ExBAN corpus: a crowd-sourced corpus of NL explanations for Bayesian Networks. We run correlations comparing human subjective ratings with NLG automatic measures. We find that embedding-based automatic NLG evaluation methods, such as BERTScore and BLEURT, have a higher correlation with human ratings, compared to word-overlap metrics, such as BLEU and ROUGE. This work has implications for Explainable AI and transparent robotic and autonomous systems.
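As an illustrative sketch of this kind of correlation analysis (toy data, not the ExBAN corpus), one can score candidate explanations with a word-overlap metric (BLEU) and an embedding-based metric (BERTScore) and correlate both with human ratings:

```python
# Minimal sketch: comparing automatic NLG metrics against human ratings.
# Assumes the `nltk`, `bert-score`, and `scipy` packages are installed.
from bert_score import score as bertscore
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from scipy.stats import spearmanr

# Hypothetical generated explanations, references, and human quality ratings.
candidates = ["the alarm went off because smoke was detected",
              "the alarm rings when the smoke node is true",
              "rain makes the grass wet so the sprinkler is off"]
references = ["the alarm was triggered by the smoke detector",
              "the alarm rings whenever smoke is detected",
              "the grass is wet because of rain, not the sprinkler"]
human_ratings = [4.5, 4.0, 2.5]

# Word-overlap metric (BLEU) per explanation.
smooth = SmoothingFunction().method1
bleu = [sentence_bleu([ref.split()], cand.split(), smoothing_function=smooth)
        for cand, ref in zip(candidates, references)]

# Embedding-based metric (BERTScore F1) per explanation.
_, _, bert_f1 = bertscore(candidates, references, lang="en")

# A higher correlation with human ratings indicates a better automatic metric.
print("BLEU vs human:     ", spearmanr(bleu, human_ratings).correlation)
print("BERTScore vs human:", spearmanr(bert_f1.tolist(), human_ratings).correlation)
```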


Explainable Goal-Driven Agents and Robots -- A Comprehensive Review

arXiv.org Artificial Intelligence

Recent applications of autonomous agents and robots, such as self-driving cars, scenario-based trainers, exploration robots, and service robots, have brought attention to crucial trust-related challenges associated with the current generation of artificial intelligence (AI) systems. Despite their great successes, AI systems based on the connectionist deep learning neural network approach lack the capability to explain their decisions and actions to others. Without symbolic interpretation capabilities, they are black boxes, which renders their decisions or actions opaque and makes it difficult to trust them in safety-critical applications. Recent attention to the explainability of AI systems has produced several approaches to eXplainable Artificial Intelligence (XAI); however, most of the studies have focused on data-driven XAI systems applied in computational sciences. Studies addressing the increasingly pervasive goal-driven agents and robots are still missing. This paper reviews approaches to explainable goal-driven intelligent agents and robots, focusing on techniques for explaining and communicating agents' perceptual functions (e.g., sensing and vision) and cognitive reasoning (e.g., beliefs, desires, intentions, plans, and goals) with humans in the loop. The review highlights key strategies that emphasize transparency, understandability, and continual learning for explainability. Finally, the paper presents requirements for explainability and suggests a roadmap for the possible realization of effective goal-driven explainable agents and robots.


An Explanation for eXplainable AI

#artificialintelligence

Artificial intelligence (AI) has been integrated into every part of our lives. A chatbot, enabled by advanced natural language processing (NLP), pops up to assist you while you surf a webpage. A voice recognition system can authenticate you in order to unlock your account. A drone or driverless car can carry out operations or reach areas that would be impossible for humans. Machine-learning (ML) predictions inform all kinds of decision-making.


US records 1,000 coronavirus deaths for fourth straight day: Live

Al Jazeera

The World Health Organization (WHO) reported a record increase in global coronavirus cases, with the total rising by 284,196 in the past 24 hours. Some 15.7 million people around the world have been diagnosed with COVID-19, while more than 638,000 have died, according to a tally by the Johns Hopkins University. More than 8.98 million people have recovered. France advised its citizens not to travel to the Spanish region of Catalonia in order to help contain the spread of COVID-19. India reported more than 49,000 fresh cases of the coronavirus with 740 new deaths, marking the biggest daily surge in infections.


Studying the Transfer of Biases from Programmers to Programs

arXiv.org Artificial Intelligence

It is generally agreed that one origin of machine bias lies in characteristics of the dataset on which the algorithms are trained, i.e., the data does not warrant a generalized inference. We, however, hypothesize that a different `mechanism', hitherto not articulated in the literature, may also be responsible for machine bias, namely that biases may originate from (i) the programmers' cultural background, such as education or line of work, or (ii) the contextual programming environment, such as software requirements or developer tools. Combining an experimental and comparative design, we studied the effects of cultural metaphors and contextual metaphors, and tested whether each of these would `transfer' from the programmer to the program, thus constituting a machine bias. The results show (i) that cultural metaphors influence the programmer's choices and (ii) that `induced' contextual metaphors can be used to moderate or exacerbate the effects of the cultural metaphors. This supports our hypothesis that biases in automated systems do not always originate from within the machine's training data. Instead, machines may also `replicate' and `reproduce' biases from the programmers' cultural background through the transfer of cultural metaphors into the programming process. Implications for academia and professional practice range from the micro programming level to the macro national-regulation or educational level, and span all societal domains where software-based systems operate, such as the popular AI-based automated decision support systems.