Explanation & Argumentation


Explainable Artificial Intelligence (XAI) with Python

#artificialintelligence

This course provides detailed insights into the latest developments in Explainable Artificial Intelligence (XAI). Our reliance on artificial intelligence models is increasing day by day, and it is becoming equally important to explain how and why an AI system makes a particular decision. Recent legislation has also added urgency to explaining and defending the decisions made by AI systems. The course discusses tools and techniques in Python for visualizing, explaining, and building trustworthy AI systems, and it covers the working principles and mathematical modeling of LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations) for generating local and global explanations.
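
As a minimal illustration of the kind of workflow such a course covers, the sketch below uses SHAP with a scikit-learn tree ensemble to produce a local explanation for a single prediction and a global summary over the whole dataset. The dataset, model, and plotting call are illustrative assumptions, not course material.

import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Illustrative setup: a regression dataset and a tree ensemble.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)   # exact Shapley values for tree ensembles
shap_values = explainer.shap_values(X)  # one attribution per feature per row

# Local explanation: per-feature contributions for one prediction.
print(dict(zip(X.columns, shap_values[0])))

# Global explanation: attribution distribution across the whole dataset.
shap.summary_plot(shap_values, X)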


DARPA's explainable AI (XAI) program: A retrospective

#artificialintelligence

Dramatic success in machine learning has created an explosion of new AI capabilities. Continued advances promise to produce autonomous systems that perceive, learn, decide, and act on their own. These systems offer tremendous benefits, but their effectiveness will be limited by the machine's inability to explain its decisions and actions to human users. This issue is especially important for the United States Department of Defense (DoD), which faces challenges that require the development of more intelligent, autonomous, and reliable systems. XAI will be essential for users to understand, appropriately trust, and effectively manage this emerging generation of artificially intelligent partners.


Pandemic babies higher risk for developmental delays, but don't blame the virus, researchers say

FOX News

Dr. Henderson Lewis Jr. explains the reasoning behind a vaccine mandate for students ages 5 and up on 'America Reports.' COVID-19 infection during pregnancy surprisingly did not increase the chance of neurodevelopmental delay in babies, although infants born during the pandemic showed higher rates of neurodevelopmental delay than those born before it, according to a recent JAMA Pediatrics study. Columbia University Irving Medical Center established a prospective cohort study, the COVID-19 Mother Baby Outcomes (COMBO) Initiative, in the spring of 2020 to study the associations between in-utero exposure to the virus and the well-being of the baby. The researchers studied a cohort of infants who were exposed to COVID-19 during pregnancy and compared them to an unexposed control group matched on gestational age at birth, date of birth, sex, and mode of delivery. Whether or not kids should be required to wear masks has been a polarizing topic throughout the COVID-19 pandemic. "Infants born to mothers who have viral infections during pregnancy have a higher risk of neurodevelopmental deficits, so we thought we would find some changes in the neurodevelopment of babies whose mothers had COVID during pregnancy," said lead investigator Dr. Dani Dumitriu.


Researchers are working toward more transparent language models

#artificialintelligence

The most sophisticated AI language models, like OpenAI's GPT-3, can perform tasks from generating code to drafting marketing copy. But many of the underlying mechanisms remain opaque, making these models prone to unpredictable -- and sometimes toxic -- behavior. As recent research has shown, even careful calibration can't always prevent language models from making sexist associations or endorsing conspiracies. Newly proposed explainability techniques promise to make language models more transparent than before. While they aren't silver bullets, they could be the building blocks for less problematic models -- or at the very least models that can explain their reasoning.
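
For readers who want a concrete starting point, the sketch below shows occlusion saliency, a classic model-agnostic baseline rather than one of the newly proposed techniques the article refers to: drop each token and measure how the model's confidence moves. A toy scikit-learn text classifier stands in for a large language model, and the training data and function name are illustrative.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in for a language-model classifier.
train_texts = ["great movie", "loved the acting", "awful plot", "terrible and boring"]
train_labels = [1, 1, 0, 0]
clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(train_texts, train_labels)

def occlusion_saliency(text):
    # Score each token by how much removing it lowers P(positive);
    # large positive scores mark tokens the model leans on.
    tokens = text.split()
    base = clf.predict_proba([text])[0, 1]
    scores = []
    for i, tok in enumerate(tokens):
        reduced = " ".join(tokens[:i] + tokens[i + 1:])
        scores.append((tok, base - clf.predict_proba([reduced])[0, 1]))
    return scores

print(occlusion_saliency("great plot but terrible acting"))

Because the probe only needs predict_proba, the same loop works unchanged on any black-box classifier, which is why occlusion remains a common sanity check alongside newer techniques.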


Explainable AI (XAI) Methods Part 2-- Individual Conditional Expectation (ICE) Curves

#artificialintelligence

A tutorial on Individual Conditional Expectation (ICE) curves: their advantages and disadvantages, how they differ from the PDP, and how to make….
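
As a hedged sketch of what such a tutorial builds toward, scikit-learn can draw ICE curves directly (assuming scikit-learn >= 1.0 and matplotlib are installed); the dataset, model, and feature choice below are illustrative assumptions.

import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# kind="individual" draws one curve per sample (the ICE curves);
# kind="both" would overlay the averaged PDP curve on top of them.
PartialDependenceDisplay.from_estimator(model, X, features=["bmi"], kind="individual")
plt.show()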


Explainable AI (XAI) Methods Part 1 -- Partial Dependence Plot (PDP)

#artificialintelligence

Explainable AI (XAI) refers to efforts to ensure that artificial intelligence programs are transparent in their purposes and in how they work. This is understandable, because many SOTA (state-of-the-art) models are black boxes that are difficult to interpret or explain despite their top-notch predictive power and performance. For many organizations and corporations, a several-percentage-point increase in classification accuracy may not be as important as answers to questions like "how does feature A affect the outcome?" This is why XAI has been receiving more attention: it greatly aids decision-making and causal inference. In the next series of posts, I will cover various XAI methodologies that are in wide use in the data science community today.
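
To make the "how does feature A affect the outcome?" question concrete, here is a minimal PDP sketch using scikit-learn's built-in support (assuming scikit-learn >= 1.0 and matplotlib); the dataset, model, and feature are illustrative assumptions, not taken from the post.

import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# The PDP averages the model's predictions over the data while sweeping
# "bmi", showing that feature's effect on the predicted outcome on average.
PartialDependenceDisplay.from_estimator(model, X, features=["bmi"], kind="average")
plt.show()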


Toward Explainable AI for Regression Models

#artificialintelligence

In addition to the impressive predictive power of machine learning (ML) models, explanation methods have emerged more recently that enable an interpretation of complex non-linear learning models such as deep neural networks. Gaining a better understanding is especially important for, e.g., safety-critical ML applications or medical diagnostics. While such Explainable AI (XAI) techniques have reached significant popularity for classifiers, so far little attention has been devoted to XAI for regression models (XAIR). In this review, we clarify the fundamental conceptual differences between XAI for regression and classification tasks, establish novel theoretical insights and analysis for XAIR, provide demonstrations of XAIR on genuine practical regression problems, and finally discuss the challenges remaining for the field.
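
As one simple, concrete example of explaining a regression model, the sketch below uses permutation feature importance, a generic global method and not one of the XAIR techniques the review itself develops; the dataset and network are illustrative assumptions.

from sklearn.datasets import load_diabetes
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000,
                                   random_state=0)).fit(X_tr, y_tr)

# Shuffle one feature at a time and measure how much held-out R^2 drops;
# bigger drops mean the regressor relies more heavily on that feature.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda item: -item[1]):
    print(f"{name}: {score:.3f}")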


Amazon.com: Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning eBook : Kamath, Uday, Liu, John: Kindle Store

#artificialintelligence

This is a wonderful book! I'm pleased that the next generation of scientists will finally be able to learn this important topic. This is the first book I've seen that has up-to-date and well-rounded coverage. Thank you to the authors!


Low Adoption Rate for Explainable AI in Financial Services Expected to Grow

#artificialintelligence

People have become very familiar with the term artificial intelligence (AI), but many of its users have only a rudimentary understanding of how it actually works. As a result, financial services and many other industries have yet to leverage its full capabilities. For financial services firms, explainable AI could drive adoption of AI-related technologies from the current rate of 30% to as high as 50% over the next 18 months, according to Gartner analyst and vice president Moutusi Sau. Sau added that the lack of explainability is inhibiting financial services providers from rolling out pilots and projects in lending and from offering more products to the "underbanked" -- those who don't seek banking products or services, often because they don't think they will qualify. Experts agree that moving to explainable AI will remove much of the mystery around AI and, as a result, drive adoption of more AI-driven services. The global Explainable AI (XAI) market is estimated to grow from $3.50 billion in 2020 to $21.03 billion by 2030, according to ResearchandMarkets.