
Explanation & Argumentation


Explainable AI or XAI: the key to overcoming the accountability challenge

#artificialintelligence

AI has become a key part of our day-to-day lives and business operations. A report from Microsoft and EY analysing the outlook for AI in 2019 and beyond stated that "65% of organisations in Europe expect AI to have a high or a very high impact on the core business." In the banking and financial industries alone, the potential for AI to improve the customer experience is vast. Important decisions are already made by AI on credit risk, wealth management and even financial crime risk assessments. Other applications include robo-advisory, intelligent pricing, product recommendation, investment services and debt collection.


Causability and explainability of artificial intelligence in medicine

#artificialintelligence

Explainable artificial intelligence (AI) is attracting much interest in medicine. Technically, the problem of explainability is as old as AI itself; classic AI relied on comprehensible, retraceable approaches. Their weakness, however, was in dealing with the uncertainties of the real world. With the introduction of probabilistic learning, applications became increasingly successful, but also increasingly opaque. We argue that there is a need to go beyond explainable AI.


Thomas Lukasiewicz awarded AXA Chair in Explainable Artificial Intelligence for Healthcare Professorship

Oxford Comp Sci

Thomas Lukasiewicz is the recent recipient of a prestigious professorship - the AXA Chair in Explainable Artificial Intelligence for Healthcare, the first AXA Chair at the University of Oxford. With the generous support of the AXA Research Fund, Professor Lukasiewicz will pursue opportunities to advance the role of AI in improving disease diagnosis, treatment, and prevention. Healthcare is expected to benefit substantially from the recent revolutionary progress in artificial intelligence (AI), because it deals with huge amounts of data on a daily basis, such as patient information, medical histories, diagnostic results, genetic data, hospital billing, and clinical studies. This huge pool of data can be used to train AI to detect patterns and make predictions and recommendations, substantially reducing the uncertainties that professionals face.


Inside the Black Box: 5 Methods for Explainable-AI (XAI)

#artificialintelligence

Explainable artificial intelligence (XAI) is the attempt to make transparent how non-linearly programmed systems arrive at their results, avoiding so-called black-box processes. It offers practical methods for explaining AI models, for example to meet requirements such as those of the General Data Protection Regulation (GDPR). Five methods that can make AI models more transparent and understandable are listed. Layer-wise Relevance Propagation (LRP) is one such technique, and it scales to potentially highly complex deep neural networks.
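To give a rough feel for how LRP works (this is not the article's own example), here is a minimal numpy sketch of the basic LRP-epsilon rule on a hypothetical two-layer ReLU network: the output relevance is propagated backwards, layer by layer, in proportion to each input's contribution to the pre-activations. The network, weights, and input below are all made up; production implementations (e.g. the Zennit or iNNvestigate libraries) handle convolutions, pooling, and several propagation rules.

```python
import numpy as np

def lrp_dense(a, w, b, relevance, eps=1e-6):
    """Redistribute relevance from a dense layer's output back to its input."""
    z = a @ w + b                                # pre-activations, shape (1, out)
    z = np.where(z >= 0, z + eps, z - eps)       # epsilon stabiliser avoids dividing by ~0
    s = relevance / z                            # relevance per unit of pre-activation
    return a * (s @ w.T)                         # R_j = a_j * sum_k w_jk * s_k

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))                      # hypothetical input
w1, b1 = rng.normal(size=(4, 3)), np.zeros(3)    # hypothetical "trained" weights
w2, b2 = rng.normal(size=(3, 2)), np.zeros(2)

h = np.maximum(0.0, x @ w1 + b1)                 # hidden ReLU activations
out = h @ w2 + b2                                # output logits

r_out = out * (out == out.max())                 # explain only the winning logit
r_hidden = lrp_dense(h, w2, b2, r_out)
r_input = lrp_dense(x, w1, b1, r_hidden)
print("per-feature relevance:", r_input)         # how each input contributed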


Allen School News » Seeing the forest for the trees: UW team advances explainable AI for popular machine learning models used to predict human disease and mortality risks

#artificialintelligence

Tree-based machine learning models are among the most popular non-linear predictive learning models in use today, with applications in a variety of domains such as medicine, finance, advertising, supply chain management, and more. These models are often described as a "black box" -- while their predictions are based on user inputs, how the models arrived at their predictions using those inputs is shrouded in mystery. This is problematic for some use cases, such as medicine, where the patterns and individual variability a model might uncover among various factors can be as important as the prediction itself. Now, thanks to researchers in the Allen School's Laboratory of Artificial Intelligence for Medicine and Science (AIMS Lab) and UW Medicine, the path from inputs to predicted outcome has become a lot less dense. In a paper published today in the journal Nature Machine Intelligence, the team presents TreeExplainer, a novel set of tools rooted in game theory that enables exact computation of optimal local explanations for tree-based models.
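TreeExplainer is available in the authors' open-source shap package. As a minimal sketch of the workflow (synthetic data stands in here for the clinical datasets used in the paper):

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a clinical dataset; the paper's experiments use
# real disease and mortality data, which are not reproduced here.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley-value attributions for tree ensembles
# in polynomial time, rather than approximating them by sampling.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# One attribution per sample per feature (per class, for classifiers):
# how each input pushed that prediction above or below the expected output.
print(np.shape(shap_values))
```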


Berlin ML Meetup: Classifying News, Image Duplicates, and Explainable AI

#artificialintelligence

Many online businesses rely on image galleries to deliver a good customer experience and consequently, generate more revenue. Hence, the image galleries need to be of the highest quality.


The current state of automated argumentation theory: a literature review

arXiv.org Artificial Intelligence

Automated negotiation can be an efficient method for resolving conflict and redistributing resources in a coalition setting. Automated negotiation has already seen increased usage in fields such as e-commerce and power distribution in smart grids, and recent advancements in opponent modelling have proven to deliver better outcomes. However, significant barriers to wider adoption remain, such as a lack of predictable outcomes over time and a lack of user trust. Additionally, there have been many recent advancements in the field of reasoning about uncertainty, which could help alleviate both problems. As there is no recent survey of these two fields, and in particular none of their possible intersection, we aim to provide such a survey here.


From unbiased MDI Feature Importance to Explainable AI for Trees

arXiv.org Machine Learning

We attempt to give a unifying view of the various recent attempts to (i) improve the interpretability of tree-based models and (ii) debias the default variable-importance measure in random forests, Gini importance. In particular, we demonstrate a common thread among the out-of-bag-based bias correction methods and their connection to local explanations for trees. In addition, we point out a bias caused by the inclusion of in-bag data in the newly developed explainable AI for trees algorithms.
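The bias the abstract refers to is easy to see empirically. The sketch below (hypothetical data, not from the paper, and not the paper's own corrected estimator) contrasts scikit-learn's default MDI/Gini importances with permutation importances on a dataset with one genuinely predictive binary feature and one irrelevant continuous one; MDI tends to overstate the continuous noise feature because it offers many candidate split points.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
informative = rng.integers(0, 2, size=n)          # binary, truly predictive
noise = rng.normal(size=n)                        # continuous, pure noise
X = np.column_stack([informative, noise])
y = np.where(rng.random(n) < 0.3, 1 - informative, informative)  # label with 30% flips

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Default MDI (Gini) importance is computed from in-bag training data and
# tends to inflate the high-cardinality (continuous) noise feature.
print("MDI importance:        ", model.feature_importances_)

# Permutation importance on held-out data does not reward the noise feature.
perm = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
print("Permutation importance:", perm.importances_mean)
```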


Artificial Intelligence Breakthrough: Training and Image Recognition on Low Power CPU (with no GPU), via Explainable-AI for Smart Appliance Pilot for Bosch

#artificialintelligence

Z Advanced Computing, Inc. (ZAC), the pioneer startup in Explainable AI (XAI), is developing its Smart Home product line through a paid pilot on Smart Appliances for BSH Home Appliances (a subsidiary of the Bosch Group, originally a joint venture between Bosch and Siemens), the largest manufacturer of home appliances in Europe and one of the largest in the world. ZAC has just successfully finished Phase 1 of the pilot program. "Our cognitive-based algorithm is more robust, resilient, consistent, and reproducible, with a higher accuracy, than Convolutional Neural Nets or GANs, which others are using now. It also requires a much smaller number of training samples, compared to CNNs, which is a huge advantage," said Dr. Saied Tadayon, CTO of ZAC. "We did the entire work on a regular laptop, for both training and recognition, without any dedicated GPU. So, our computing requirement is much smaller than a typical Neural Net, which requires a dedicated GPU," continued Dr. Bijan Tadayon, CEO of ZAC.


Will XAI become the key factor to future Artificial Intelligence adoption?

#artificialintelligence

Explainable Artificial Intelligence (XAI) seems to be a hot topic nowadays. It is a topic I have come across recently in a number of instances: workshops organized by the European Defence Agency (EDA), posts from technology partners such as Expert System, and internal discussions with SDL's Research team. The straightforward definition of XAI comes from Wikipedia: "Explainable AI (XAI) refers to methods and techniques in the application of artificial intelligence technology (AI) such that the results of the solution can be understood by human experts. It contrasts with the concept of the 'black box' in machine learning, where even their designers cannot explain why the AI arrived at a specific decision. XAI is an implementation of the social right to explanation."