Explaining artificial intelligence in human-centred terms – Martin Schüßler

#artificialintelligence

Since AI involves interactions between machines and humans--rather than just the former replacing the latter--'explainable AI' is a new challenge. Intelligent systems, based on machine learning, are penetrating many aspects of our society. They span a large variety of applications--from the seemingly harmless automation of micro-tasks, such as the suggestion of synonymous phrases in text editors, to more contestable uses, such as in jail-or-release decisions, anticipating child-services interventions, predictive policing and many others. Researchers have shown that for some tasks, such as lung-cancer screening, intelligent systems are capable of outperforming humans. In many other cases, however, they have not lived up to exaggerated expectations.


The case for self-explainable AI

#artificialintelligence

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence. Would you trust an artificial intelligence algorithm that works eerily well, making accurate decisions 99.9 percent of the time, but is a mysterious black box? Every system fails every now and then, and when it does, we want explanations, especially when human lives are at stake. And a system that can't be explained can't be trusted. That is one of the problems the AI community faces as their creations become smarter and more capable of tackling complicated and critical tasks.


Where explainable AI will be crucial in industry - TechHQ

#artificialintelligence

As artificial intelligence (AI) matures and new applications boom amid the transition to Industry 4.0, we are beginning to accept that machines can help us make decisions more effectively and efficiently. But, at present, we don't always have clear insight into how or why a model made those decisions – this is 'black box AI'. In light of alleged bias in AI models across recruitment, loan decisions, and healthcare applications, the ability to explain the workings of decisions made by an AI model has become imperative for the technology's further development and adoption. In December last year, the UK's Information Commissioner's Office (ICO) began moving to require businesses and other organizations, by law, to explain decisions made by AI or face multimillion-dollar fines if they cannot. Explainable AI is the concept of being able to describe the procedures, services, and outcomes delivered or assisted by AI when that information is required, such as in the case of accusations of bias.


Causability and Explainability of Artificial Intelligence in Medicine - PubMed

#artificialintelligence

Explainable artificial intelligence (AI) is attracting much interest in medicine. Technically, the problem of explainability is as old as AI itself; classic AI represented comprehensible, retraceable approaches. However, their weakness lay in dealing with the uncertainties of the real world. Through the introduction of probabilistic learning, applications became increasingly successful, but also increasingly opaque. We argue that there is a need to go beyond explainable AI.


Allen School News » Seeing the forest for the trees: UW team advances explainable AI for popular machine learning models used to predict human disease and mortality risks

#artificialintelligence

Tree-based machine learning models are among the most popular non-linear predictive learning models in use today, with applications in a variety of domains such as medicine, finance, advertising, supply chain management, and more. These models are often described as a "black box" -- while their predictions are based on user inputs, how the models arrived at their predictions using those inputs is shrouded in mystery. This is problematic for some use cases, such as medicine, where the patterns and individual variability a model might uncover among various factors can be as important as the prediction itself. Now, thanks to researchers in the Allen School's Laboratory of Artificial Intelligence for Medicine and Science (AIMS Lab) and UW Medicine, the path from inputs to predicted outcome has become a lot less dense. In a paper published today in the journal Nature Machine Intelligence, the team presents TreeExplainer, a novel set of tools rooted in game theory that enables exact computation of optimal local explanations for tree-based models.
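To see the idea in practice, here is a minimal sketch using the open-source shap package, which implements TreeExplainer. The dataset (scikit-learn's diabetes regression set) and the model choice are illustrative assumptions, not the study's actual clinical data.

# Minimal sketch: exact local explanations for a tree ensemble via the
# shap package's TreeExplainer. Dataset and model are illustrative only.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit a tree-based model on a small public medical dataset
# (disease-progression regression).
X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles: a
# game-theoretic attribution of each prediction to the input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# One row per sample, one column per feature; a sample's values plus
# explainer.expected_value sum to the model's prediction for that sample.
print(shap_values[0])

Because the attributions are exact for tree models, each sample's per-feature values plus the explainer's expected value reproduce the model's prediction, which is what lets clinicians inspect the factors behind an individual risk estimate rather than only a global ranking.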


From unbiased MDI Feature Importance to Explainable AI for Trees

arXiv.org Machine Learning

We attempt to give a unifying view of the various recent attempts to (i) improve the interpretability of tree-based models and (ii) debias the default variable-importance measure in random forests, Gini importance. In particular, we demonstrate a common thread among the out-of-bag-based bias-correction methods and their connection to local explanations for trees. In addition, we point out a bias caused by the inclusion of in-bag data in the newly developed explainable-AI-for-trees algorithms.
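As a rough illustration of the distinction the abstract draws, the sketch below uses scikit-learn to compare the default MDI (Gini) importance, computed from in-bag impurity decreases, with permutation importance computed on held-out data. The dataset, split, and hyperparameters are assumptions for illustration, not the paper's experimental setup.

# Contrast in-bag MDI (Gini) importance with held-out permutation
# importance; all data and settings here are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# MDI ("Gini importance"): derived from in-bag impurity decreases and
# known to be biased toward high-cardinality / continuous features.
mdi = rf.feature_importances_

# Permutation importance on held-out data avoids that in-bag bias.
perm = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)

for name, a, b in zip(data.feature_names, mdi, perm.importances_mean):
    print(f"{name:30s}  MDI={a:.3f}  permutation={b:.3f}")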


Answering the Question Why: Explainable AI

#artificialintelligence

The statistical branch of Artificial Intelligence has enamored organizations across industries, spurred an immense amount of capital dedicated to its technologies, and entranced numerous media outlets for the past couple of years. All of this attention, however, will ultimately prove unwarranted unless organizations, data scientists, and various vendors can answer one simple question: can they provide Explainable AI? Although the ability to explain the results of Machine Learning models--and produce consistent results from them--has never been easy, a number of emergent techniques have recently appeared to open the proverbial'black box' rendering these models so difficult to explain. One of the most useful involves modeling real-world events with the adaptive schema of knowledge graphs and, via Machine Learning, gleaning whether they're related and how frequently they take place together. When the knowledge graph environment becomes endowed with an additional temporal dimension that organizations can traverse forwards and backwards with dynamic visualizations, they can understand what actually triggered these events, how one affected others, and the critical aspect of causation necessary for Explainable AI.
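To make the described approach concrete, the following sketch shows a heavily simplified version of the idea: events stored in a graph with timestamps, linked by co-occurrence counts that can be traversed forwards in time. The event names, the time window, and the use of plain counting in place of a learned model are hypothetical choices for illustration only.

# Simplified sketch of a temporal event graph with co-occurrence counts;
# event names, window, and counting scheme are hypothetical.
from datetime import datetime, timedelta
import networkx as nx

# Hypothetical event log: (event type, timestamp).
events = [
    ("supplier_delay", datetime(2020, 3, 1)),
    ("stockout",       datetime(2020, 3, 2)),
    ("supplier_delay", datetime(2020, 4, 10)),
    ("stockout",       datetime(2020, 4, 11)),
    ("price_increase", datetime(2020, 4, 20)),
]

G = nx.DiGraph()
window = timedelta(days=3)

# Link each event to later events that fall inside the time window,
# accumulating a co-occurrence count on the directed edge.
for kind_a, t_a in events:
    for kind_b, t_b in events:
        if t_a < t_b <= t_a + window:
            if G.has_edge(kind_a, kind_b):
                G[kind_a][kind_b]["count"] += 1
            else:
                G.add_edge(kind_a, kind_b, count=1)

# Walking the edges forwards in time suggests which events tend to
# precede (and may have triggered) which others.
for u, v, d in G.edges(data=True):
    print(f"{u} -> {v}: co-occurred {d['count']} time(s)")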


Directions for Explainable Knowledge-Enabled Systems

arXiv.org Artificial Intelligence

Interest in the field of Explainable Artificial Intelligence has been growing for decades, and has accelerated recently. As Artificial Intelligence models have become more complex, and often more opaque, with the incorporation of complex machine learning techniques, explainability has become more critical. Recently, researchers have been investigating and tackling explainability with a user-centric focus, looking for explanations to consider trustworthiness, comprehensibility, explicit provenance, and context-awareness. In this chapter, we leverage our survey of explanation literature in Artificial Intelligence and closely related fields and use these past efforts to generate a set of explanation types that we feel reflect the expanded needs of explanation for today's artificial intelligence applications. We define each type and provide an example question that would motivate the need for this style of explanation. We believe this set of explanation types will help future system designers in their generation and prioritization of requirements and further help generate explanations that are better aligned to users' and situational needs.


Foundations of Explainable Knowledge-Enabled Systems

arXiv.org Artificial Intelligence

Explainability has been an important goal since the early days of Artificial Intelligence. Several approaches for producing explanations have been developed. However, many of these approaches were tightly coupled with the capabilities of the artificial intelligence systems at the time. With the proliferation of AI-enabled systems in sometimes critical settings, there is a need for them to be explainable to end-users and decision-makers. We present a historical overview of explainable artificial intelligence systems, with a focus on knowledge-enabled systems, spanning the expert systems, cognitive assistants, semantic applications, and machine learning domains. Additionally, borrowing from the strengths of past approaches and identifying gaps needed to make explanations user- and context-focused, we propose new definitions for explanations and explainable knowledge-enabled systems.