Explanation & Argumentation


The How of Explainable AI: Post-modelling Explainability

#artificialintelligence

Currently, AI models are often developed with only predictive performance in mind; thus, the majority of the XAI literature is dedicated to explaining pre-developed models. This focus, together with the recent surge of interest in XAI research, has produced numerous and diverse post-hoc explainability methods, and the sheer variety of approaches makes this body of literature challenging to navigate. To make sense of post-hoc explainability methods, we propose a taxonomy that reveals their common structure, organized around four key aspects: the target, what is to be explained about the model; the drivers, what is causing the target; the explanation family, how information about the drivers is communicated to the user; and the estimator, the computational process of actually obtaining the explanation. For instance, the popular Local Interpretable Model-agnostic Explanations (LIME) approach explains an instance prediction of a model (the target) in terms of input features (the drivers) using importance scores (the explanation family) computed through local perturbations of the model input (the estimator).
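
A minimal sketch may help make the four aspects concrete. The function below (a hypothetical local_importance helper, not the actual LIME library API) perturbs an instance (the drivers), queries a black-box predict_fn (the target), and fits a proximity-weighted linear surrogate whose coefficients serve as per-feature importance scores (the explanation family), estimated by local perturbation (the estimator). The toy_model is purely illustrative.

    import numpy as np

    def local_importance(predict_fn, instance, n_samples=1000, scale=0.1, seed=0):
        """Minimal LIME-style estimator: perturb an instance locally, query the
        model, and fit a proximity-weighted linear surrogate whose coefficients
        act as per-feature importance scores."""
        rng = np.random.default_rng(seed)
        # Drivers: perturbed copies of the input features around the instance.
        perturbations = instance + rng.normal(0.0, scale, size=(n_samples, instance.size))
        # Target: the black-box model's predictions on the perturbed inputs.
        predictions = predict_fn(perturbations)
        # Proximity weights: perturbations closer to the instance count more.
        weights = np.exp(-np.sum((perturbations - instance) ** 2, axis=1) / (2 * scale ** 2))
        # Estimator: weighted least-squares fit of a local linear surrogate.
        X = np.hstack([np.ones((n_samples, 1)), perturbations])
        W = np.sqrt(weights)[:, None]
        coef, *_ = np.linalg.lstsq(X * W, predictions * W.ravel(), rcond=None)
        # Explanation family: one importance score per input feature.
        return coef[1:]

    # Hypothetical usage with a toy linear "black box".
    toy_model = lambda X: X @ np.array([2.0, -1.0, 0.0])
    scores = local_importance(toy_model, np.array([1.0, 0.5, -0.3]))
    print(scores)  # approximately [2, -1, 0]

In the real LIME method the surrogate is also fit on an interpretable representation of the input and regularized for sparsity; the sketch keeps only the perturb-then-fit core to illustrate the taxonomy.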


Reason-Checking Fake News

Communications of the ACM

While deliberate misinformation and deception are by no means new societal phenomena, the recent rise of fake news [5] and information silos [2] has become a growing international concern, with politicians, governments, and media organizations regularly lamenting the issue. A remedy to this situation, we argue, could be found in using technology to empower people's ability to critically assess the quality of information, reasoning, and argumentation. Recent empirical findings suggest that "false news spreads more than the truth because humans, not robots, are more likely to spread it" [10]. Thus, instead of continuing to focus on ways of limiting the efficacy of bots, educating human users to better recognize fake news stories could prove more effective in mitigating the potentially devastating social impact of misinformation. While technology certainly contributes to the distribution of fake news and similar attacks on reasonable decision-making and debate, we posit that technology, argument technology in particular, can equally be employed to counterbalance these deliberately misleading or outright false reports made to look like genuine news.


5 Reasons Why We Need Explainable Artificial Intelligence

#artificialintelligence

This might be the first time you hear about Explainable Artificial Intelligence, but it is certainly something you should have an opinion about. Explainable AI (XAI) refers to techniques and methods for building AI applications whose decisions humans can understand, that is, why they arrived at a particular output. In other words, if we can get explanations from an AI system about its inner logic, the system is considered an XAI system. Explainability is a property that has recently gained popularity in the AI community, and we will talk about why that happened. Let's dive into the technical roots of the problem first.


Atish Ray on LinkedIn: Industrialized ML for Governed, Responsible and Explainable AI - Databricks

#artificialintelligence

Accenture research shows that a full 84% of C-suite executives believe they must leverage Artificial Intelligence (AI) to achieve their growth objectives. Yet 76% acknowledge that they struggle to scale it across the business. Having the right framework in place for "Industrializing ML" is a key component of scaling AI in the enterprise. Join us for a glimpse into the world of Industrialized ML as it comes to life at Navy Federal Credit Union using the Databricks Unified Analytics Platform.


Explainable AI and Design

#artificialintelligence

The most useful and accurate AI models also tend to be the most complex, and the more complex a model is, the more challenging it is to comprehend and trust. Why did it make that prediction? AI is not infallible, and it increasingly operates in an opaque way. This severely limits the adoption of advanced AI models in critical settings. The goal of Explainable AI (XAI) is to develop techniques that help users better understand and trust AI models.


A collection of recommendable papers and articles on Explainable AI (XAI)

#artificialintelligence

Explainable AI (XAI) refers to methods and techniques in the application of artificial intelligence (AI) technology such that the results of the solution can be understood by humans. It contrasts with the concept of the "black box" in machine learning, where even its designers cannot explain why the AI arrived at a specific decision. XAI may be an implementation of the social right to explanation. XAI is relevant even if there are no legal rights or regulatory requirements; for example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. The technical challenge of explaining AI decisions is sometimes known as the interpretability problem.


Explainable AI: Making Sense of the Black Box

#artificialintelligence

The Black Square is an iconic painting by Russian artist Kazimir Malevich; the first version was completed in 1915. The Black Square continues to impress art historians even today; however, it did not impress the Soviet government of the time and was kept in such poor conditions that it suffered significant cracking and decay. Complex machine learning algorithms can be mathematical works of art, but if these black-box algorithms fail to impress and build trust with users, they might be ignored like Malevich's Black Square. Dramatic success in machine learning has led to a surge of Artificial Intelligence (AI) applications.


Explainable Artificial Intelligence (XAI)

#artificialintelligence

This article was written by Dr. Matt Turek. Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by machines' current inability to explain their decisions and actions to human users (Figure 1). The Department of Defense (DoD) is facing challenges that demand more intelligent, autonomous, and symbiotic systems.


Assistant Professor in Explainable AI (tenure-track)

#artificialintelligence

We invite applications for a tenure-track position in computer science, focused on explainable artificial intelligence and the ability to collaborate with the social sciences. DKE research lines include human-centered aspects of recommender systems, as well as a strong applied mathematics component such as dynamic game theory (differential, evolutionary, spatial, and stochastic game theory). The position is supported by the large and growing Explainable and Reliable Artificial Intelligence (ERAI) group of DKE. The group consists of Associate and Assistant Professors, postdoctoral researchers, PhD candidates, and master's/bachelor's students. The ERAI group works together closely on a day-to-day basis to exchange knowledge, ideas, and research advancements.


The explainability problem - can new approaches pry open the AI black box?

#artificialintelligence

The so-called "black box" aspect of AI, usually referred to as the explainability problem, or XAI for short, arose slowly over the past few years. Still, with the rapid development of AI, it is now considered a significant problem. How can you trust a model if you cannot understand how it reaches its conclusions? Whether for commercial benefit, ethical concerns, or regulatory considerations, XAI is essential if users are to understand, appropriately trust, and effectively manage AI results. In researching this topic, I was surprised to find almost 400 papers on the subject.