Collaborating Authors

Generating Counterfactual and Contrastive Explanations using SHAP Artificial Intelligence

With the advent of GDPR, the domain of explainable AI and model interpretability has gained added impetus: methods to extract and communicate visibility into decision-making models have become a legal requirement. Two specific types of explanations, contrastive and counterfactual, have been identified as suitable for human understanding. In this paper, we propose a model-agnostic method and its systemic implementation to generate these explanations using Shapley additive explanations (SHAP). The General Data Protection Regulation (GDPR) is a regulation focused on data protection and on algorithmic decision-making, and it is binding on companies operating in the European Union. One of its controversial provisions is the 'Right to Explanation', which allows those significantly (socially) impacted by an algorithm's decision to demand an explanation or rationale behind that decision (e.g., being denied a loan application).
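The abstract does not spell out the algorithm, but the general idea of attribution-driven counterfactuals can be sketched as follows. This is a hypothetical illustration, not the paper's method: it uses a crude occlusion-based attribution as a stand-in for true SHAP values, and the toy "loan" model, feature names, and threshold are all invented for the example.

```python
# Hypothetical sketch: rank features by an occlusion-based attribution (a
# crude stand-in for SHAP values), then flip the most influential features
# toward a baseline until the model's decision changes, yielding a
# counterfactual. NOT the paper's algorithm -- an illustration only.

def attributions(model, x, baseline):
    """Marginal prediction change when each feature is set to its baseline."""
    base_pred = model(x)
    scores = []
    for i in range(len(x)):
        x_mod = list(x)
        x_mod[i] = baseline[i]
        scores.append(base_pred - model(x_mod))
    return scores

def counterfactual(model, x, baseline, threshold=0.5):
    """Greedily replace the most influential features until the decision flips."""
    scores = attributions(model, x, baseline)
    order = sorted(range(len(x)), key=lambda i: -abs(scores[i]))
    decision = model(x) >= threshold
    x_cf, changed = list(x), []
    for i in order:
        if (model(x_cf) >= threshold) != decision:
            break  # decision has flipped: x_cf is a counterfactual
        x_cf[i] = baseline[i]
        changed.append(i)
    return x_cf, changed

# Toy linear "loan" scorer over three scaled features (invented for this sketch).
model = lambda x: 0.4 * x[0] + 0.4 * x[1] + 0.2 * x[2]
applicant = [1.0, 1.0, 0.0]
reference = [0.0, 0.0, 0.0]
x_cf, changed = counterfactual(model, applicant, reference)
```

The list of changed features doubles as a contrastive explanation ("you were approved rather than denied because of these features"), while `x_cf` is the counterfactual instance itself.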

A Benchmark Arabic Dataset for Commonsense Explanation Artificial Intelligence

Language comprehension and commonsense knowledge validation by machines are challenging tasks that remain under-researched and under-evaluated for Arabic text. In this paper, we present a benchmark Arabic dataset for commonsense explanation. The dataset consists of Arabic sentences that do not make sense, each paired with three choices from which to select the one that explains why the sentence is false. Furthermore, this paper presents baseline results to assist and encourage future research in this field. The dataset is distributed under the Creative Commons CC-BY-SA 4.0 license and can be found on GitHub.
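The task format described above (a nonsensical sentence plus three candidate explanations, one correct) can be illustrated with a minimal sketch. The example sentences below are invented English placeholders, not items from the Arabic dataset, and the length-based baseline is purely illustrative.

```python
# Hypothetical illustration of the commonsense-explanation task format.
# The example below is an invented placeholder, not from the dataset.
examples = [
    {"sentence": "He put the elephant in his pocket.",
     "choices": ["Elephants are grey.",
                 "An elephant is far too large to fit in a pocket.",
                 "Pockets are made of cloth."],
     "label": 1},  # index of the choice that explains why the sentence is false
]

def accuracy(predict, examples):
    """Fraction of examples where the system picks the correct explanation."""
    hits = sum(predict(ex["sentence"], ex["choices"]) == ex["label"]
               for ex in examples)
    return hits / len(examples)

# Trivial baseline: always pick the longest candidate explanation.
longest = lambda sentence, choices: max(
    range(len(choices)), key=lambda i: len(choices[i]))
```

Baselines of this kind give a floor against which learned models can be compared.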

If Only We Had Better Counterfactual Explanations: Five Key Deficits to Rectify in the Evaluation of Counterfactual XAI Techniques Artificial Intelligence

In recent years, there has been an explosion of AI research on counterfactual explanations as a solution to the problem of eXplainable AI (XAI). These explanations seem to offer technical, psychological and legal benefits over other explanation techniques. We survey 100 distinct counterfactual explanation methods reported in the literature. This survey addresses the extent to which these methods have been adequately evaluated, both psychologically and computationally, and quantifies the shortfalls that occur. For instance, only 21% of these methods have been user-tested. Five key deficits in the evaluation of these methods are detailed, and a roadmap with standardised benchmark evaluations is proposed to resolve the issues that currently block scientific progress in this field.

Kim Jong Un's Wife Is Missing: North Korean First Lady Ri Sol-ju Hasn't Been Seen In Public Since March

International Business Times

For seven months, North Korean dictator Kim Jong Un's wife has not been seen at any public events. The disappearance of Ri Sol-ju has left experts who monitor Kim and the North Korean government struggling to come up with a plausible explanation for her whereabouts. Toshimitsu Shigemura, a professor at Tokyo's Waseda University who studies Pyongyang leadership, told the Telegraph in a report published Monday that there could be several explanations for Ri's disappearance, including family infighting and pregnancy. "There are several possible reasons, including that she is pregnant or that there is some sort of problem between the two of them," Shigemura said. "There have also been reports of instability in Pyongyang and even of several attempted attacks, including by factions in the North Korean military, against Kim last year."

Evaluation of Local Explanation Methods for Multivariate Time Series Forecasting Machine Learning

Being able to interpret a machine learning model is a crucial task in many applications of machine learning. Specifically, local interpretability is important in determining why a model makes particular predictions. Despite the recent focus on AI interpretability, there has been a lack of research on local interpretability methods for time series forecasting, while the few interpretable methods that exist mainly focus on time series classification tasks. In this study, we propose two novel evaluation metrics for time series forecasting: Area Over the Perturbation Curve for Regression and Ablation Percentage Threshold. These two metrics measure the local fidelity of local explanation models. We extend the theoretical foundation of these metrics and collect experimental results on two popular datasets, Rossmann sales and electricity. Both metrics enable a comprehensive comparison of numerous local explanation models and reveal which are more sensitive. Lastly, we provide heuristic reasoning for this analysis.
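The perturbation-curve idea behind metrics of this kind can be sketched in a few lines. This is a simplified, hypothetical version for a generic regression model; the paper's exact definitions of its two metrics may differ, and the toy forecaster and importance vectors below are invented for the example.

```python
# Simplified sketch of a perturbation-curve fidelity check for regression:
# ablate features in the order an explanation claims they matter, and average
# the absolute change in the prediction. A faithful explanation moves the
# prediction early, producing a larger area. NOT the paper's exact metric.

def aopc_regression(model, x, importance, baseline):
    """Average absolute prediction change under cumulative ablation."""
    order = sorted(range(len(x)), key=lambda i: -importance[i])
    original = model(x)
    x_pert, deltas = list(x), []
    for i in order:
        x_pert[i] = baseline[i]              # ablate the next-ranked feature
        deltas.append(abs(original - model(x_pert)))
    return sum(deltas) / len(deltas)

# Toy forecaster where feature 0 genuinely matters most.
model = lambda x: 3.0 * x[0] + 1.0 * x[1]
x, baseline = [1.0, 1.0], [0.0, 0.0]

# A faithful ranking (feature 0 first) should score higher than an
# unfaithful one (feature 1 first).
faithful = aopc_regression(model, x, importance=[3.0, 1.0], baseline=baseline)
unfaithful = aopc_regression(model, x, importance=[1.0, 3.0], baseline=baseline)
```

Comparing such scores across explanation methods on the same model and inputs is what enables the kind of local-fidelity comparison the abstract describes.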