What Are Explainable AI Principles

#artificialintelligence

Explainable AI (XAI) principles are a set of guidelines for the fundamental properties that explainable AI systems should adopt. Explainable AI seeks to explain the way that AI systems work. The four principles draw on a variety of disciplines that contribute to explainable AI, including computer science, engineering and psychology. They apply individually, so the presence of one does not imply that the others will be present. NIST suggests that each principle be evaluated in its own right.


Interpretable vs Explainable Machine Learning

#artificialintelligence

From medical diagnoses to credit underwriting, machine learning models are being used to make increasingly important decisions. To trust the systems powered by these models, we need to know how they make predictions. This is why the difference between an interpretable and an explainable model is important. The way we understand our models, and the degree to which we can truly understand them, depends on whether they are interpretable or explainable. Put briefly, an interpretable model can be understood by a human without any other aids or techniques.
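
To make the distinction concrete, here is a minimal Python sketch (my own illustration, not from the article) contrasting a directly interpretable model with a black-box model that needs a post-hoc aid; the dataset and model choices are arbitrary examples using scikit-learn.

# Illustrative sketch: a shallow decision tree can be read directly as rules
# (interpretable), while a boosted ensemble is opaque and is explained only
# via an aid such as permutation importance (explainable).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Interpretable: inspect the fitted tree's decision rules directly.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))

# Explainable only via a technique: estimate which features matter to the
# black-box model by permuting them and measuring the drop in score.
boosted = GradientBoostingClassifier(random_state=0).fit(X, y)
result = permutation_importance(boosted, X, y, n_repeats=5, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.3f}")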


Levels of explainable artificial intelligence for human-aligned conversational explanations

#artificialintelligence

Over the last few years there has been rapid research growth into eXplainable Artificial Intelligence (XAI) and the closely aligned Interpretable Machine Learning (IML). Drivers for this growth include recent legislative changes and increased investments by industry and governments, along with increased concern from the general public. People are affected by autonomous decisions every day, and the public needs to understand the decision-making process to accept the outcomes. However, the vast majority of applications of XAI/IML are focused on providing low-level 'narrow' explanations of how an individual decision was reached based on a particular datum. While important, these explanations rarely provide insights into an agent's: beliefs and motivations; hypotheses of other (human, animal or AI) agents' intentions; interpretation of external cultural expectations; or, processes used to generate its own explanation. Yet all of these factors, we propose, are essential to providing the explanatory depth that people require to accept and trust the AI's decision-making.


LEx: A Framework for Operationalising Layers of Machine Learning Explanations

arXiv.org Artificial Intelligence

Several social factors impact how people respond to AI explanations used to justify AI decisions affecting them personally. In this position paper, we define a framework called the layers of explanation (LEx), a lens through which we can assess the appropriateness of different types of explanations. The framework uses the notions of sensitivity (emotional responsiveness) of features and the level of stakes (the decision's consequences) in a domain to determine whether different types of explanations are appropriate in a given context. We demonstrate how to use the framework to assess the appropriateness of different types of explanations in different domains.
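
As a rough illustration of the idea (the rubric, category names and mapping below are hypothetical, not taken from the paper), one could imagine operationalising sensitivity and stakes as a simple lookup that returns the explanation types deemed appropriate for a context:

# Hypothetical sketch of the LEx idea: appropriateness of an explanation
# type is judged from feature sensitivity and the stakes of the decision.
# All labels and the mapping itself are my own illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Context:
    sensitivity: str  # "low" or "high" emotional responsiveness of features
    stakes: str       # "low" or "high" consequence of the decision

def appropriate_explanations(ctx: Context) -> list[str]:
    """Return explanation types plausibly appropriate in this context."""
    if ctx.stakes == "low" and ctx.sensitivity == "low":
        return ["example-based", "feature attribution"]
    if ctx.stakes == "high" and ctx.sensitivity == "low":
        return ["feature attribution", "counterfactual"]
    # High sensitivity: prefer explanations that avoid exposing raw
    # sensitive features, regardless of stakes.
    return ["counterfactual", "contrastive"]

print(appropriate_explanations(Context(sensitivity="high", stakes="high")))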


Bridging the Gap Between Explainable AI and Uncertainty Quantification to Enhance Trustability

arXiv.org Artificial Intelligence

After the tremendous advances in deep learning and other AI methods, more attention is turning to further properties of modern approaches, such as interpretability and fairness, combined in frameworks like Responsible AI. Two research directions, namely Explainable AI and Uncertainty Quantification, are becoming more and more important, but have so far never been combined and jointly explored. In this paper, I show how both research areas offer potential for combination, why more research should be done in this direction, and how this would increase the trustability of AI systems.
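
One way such a combination might look in practice, sketched here under my own assumptions rather than as the paper's method, is to report a post-hoc explanation alongside a simple uncertainty estimate, for example the spread of predictions across a bootstrap ensemble:

# Illustrative sketch: pair a global explanation (permutation importance)
# with a per-sample uncertainty signal (variance of positive-class
# probabilities across a bagged ensemble). Dataset and models are arbitrary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)

ensemble = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25,
                             random_state=0).fit(X, y)

# Uncertainty: spread of positive-class probabilities across ensemble members.
member_probs = np.stack([m.predict_proba(X)[:, 1] for m in ensemble.estimators_])
uncertainty = member_probs.std(axis=0)

# Explanation: global feature importances for the whole ensemble.
importances = permutation_importance(ensemble, X, y, n_repeats=5,
                                     random_state=0).importances_mean

print("mean per-sample uncertainty:", uncertainty.mean().round(3))
print("most important feature index:", int(np.argmax(importances)))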