pi-explanation
eccd2a86bae4728b38627162ba297828-Paper.pdf
In contrast, we show that the computation of one PI-explanation for an NBC can be achieved in log-linear time, and that the same result also applies to the more general class of linear classifiers. Furthermore, we show that the enumeration of PI-explanations can be obtained with polynomial delay. Experimental results demonstrate the performance gains of the new algorithms when compared with earlier work.
- Europe > France (0.05)
- Oceania > Australia (0.04)
- North America > United States (0.04)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
eccd2a86bae4728b38627162ba297828-AuthorFeedback.pdf
First, LCs and NBCs are extensively used in different settings, with NBCs being deemed by some as one of the top algorithms in data mining. Q3: We will cover the references mentioned by the reviewer. However, that is orthogonal to our work. If one fixes the linear model, we will compute a rigorous PI-explanation in log-linear time.
Explaining Naive Bayes and Other Linear Classifiers with Polynomial Time and Delay
Recent work proposed the computation of so-called PI-explanations of Naive Bayes Classifiers (NBCs). PI-explanations are subset-minimal sets of feature-value pairs that are sufficient for the prediction, and have been computed with state-of-the-art exact algorithms that are worst-case exponential in time and space. In contrast, we show that the computation of one PI-explanation for an NBC can be achieved in log-linear time, and that the same result also applies to the more general class of linear classifiers. Furthermore, we show that the enumeration of PI-explanations can be obtained with polynomial delay. Experimental results demonstrate the performance gains of the new algorithms when compared with earlier work. The experimental results also investigate ways to measure the quality of heuristic explanations.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > France > Occitanie > Haute-Garonne > Toulouse (0.04)
- Oceania > Australia (0.04)
- (2 more...)
- Law (0.46)
- Media (0.38)
- Leisure & Entertainment (0.38)
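The log-linear computation summarized in the abstract above can be illustrated with a small sketch (an assumption-laden illustration, not the authors' actual code): for a linear classifier over bounded features, compute each feature's "slack" (the worst-case drop in score if that feature is left free), sort by slack, and greedily free features while the prediction survives. The sort is the log-linear step, and the surviving features form a subset-minimal PI-explanation. All names and the bounded-domain setup here are illustrative.

```python
# Hedged sketch: one PI-explanation for a linear classifier w·x + b > 0
# over features bounded in [lo[i], hi[i]]. Illustrative only.

def pi_explanation(w, b, x, lo, hi):
    """Return a subset-minimal set of feature indices whose fixed values
    suffice to keep the prediction w·x + b > 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    assert score > 0, "sketch assumes the positive class is predicted"
    # slack[i]: worst-case drop in score if feature i is freed
    # (an adversary pushes it to whichever bound hurts the prediction).
    slack = [wi * xi - min(wi * l, wi * h)
             for wi, xi, l, h in zip(w, x, lo, hi)]
    kept = set(range(len(w)))
    # Sorting by slack is the log-linear step: free cheap features first.
    for i in sorted(range(len(w)), key=lambda i: slack[i]):
        if score - slack[i] > 0:   # prediction survives freeing feature i
            score -= slack[i]
            kept.remove(i)
    return sorted(kept)
```

Each kept feature was skipped because freeing it (on top of what was already freed) would flip the prediction, and the score only decreases afterwards, so the result is subset-minimal; freeing in ascending slack order also makes it a smallest explanation, matching Proposition 1 in the supplementary material.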
A Supplementary Material
Thus, Proposition 1 yields a smallest PI-explanation. Proposition 2. PI-explanations of an XLC can be enumerated with log-linear delay. This means that all PI-explanations will be found. Thus, all leaves correspond to PI-explanations. Figure 5: Percentage of important "hits" of explanations produced by Anchor and SHAP.
Towards a Shapley Value Graph Framework for Medical peer-influence
Duell, Jamie, Seisenberger, Monika, Aarts, Gert, Zhou, Shangming, Fan, Xiuyi
Explainable Artificial Intelligence (XAI) is at the forefront of Artificial Intelligence (AI) research, with a variety of techniques and libraries coming to fruition in recent years, e.g., model-agnostic explanations [1, 2], counterfactual explanations [3, 4], contrastive explanations [5] and argumentation-based explanations [6, 7]. XAI methods are ubiquitous across fields of Machine Learning (ML), where the trust factor associated with applied ML is undermined by the black-box nature of methods. Generally speaking, an ML model takes a set of inputs (features) and predicts some output; and existing works on XAI predominantly focus on understanding relations between features and output. These approaches in XAI are successful in many areas, as they suggest how an output of a model might change should we change its inputs. Thus, interventions - manipulating inputs in specific ways with the hope of reaching some desired outcome - can be provoked using existing XAI methods when they are capable of providing relatively accurate explanations [8, 9]. However, with existing XAI methods holding little knowledge of the consequences of interventions [10], such interventions could be susceptible to error. From both a business and an ethical standpoint, we must reach beyond understanding relations between features and their outputs; we also need to understand the influence that features have on one another. We believe such knowledge holds the key to a deeper understanding of model behaviours and the identification of suitable interventions.
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- Asia > Singapore (0.04)
Efficient Explanations for Knowledge Compilation Languages
Huang, Xuanxiang, Izza, Yacine, Ignatiev, Alexey, Cooper, Martin C., Asher, Nicholas, Marques-Silva, Joao
Knowledge compilation (KC) languages find a growing number of practical uses, including in Constraint Programming (CP) and in Machine Learning (ML). In most applications, one natural question is how to explain the decisions made by models represented by a KC language. This paper shows that for many of the best known KC languages, well-known classes of explanations can be computed in polynomial time. These classes include deterministic decomposable negation normal form (d-DNNF), and so any KC language that is strictly less succinct than d-DNNF. Furthermore, the paper also investigates the conditions under which polynomial time computation of explanations can be extended to KC languages more succinct than d-DNNF.
On Efficiently Explaining Graph-Based Classifiers
Huang, Xuanxiang, Izza, Yacine, Ignatiev, Alexey, Marques-Silva, Joao
Recent work has shown that decision trees (DTs) may not be interpretable, and has also proposed a polynomial-time algorithm for computing one PI-explanation of a DT. This paper shows that for a wide range of classifiers, globally referred to as decision graphs, which include decision trees and binary decision diagrams as well as their multi-valued variants, there exist polynomial-time algorithms for computing one PI-explanation. In addition, the paper also proposes a polynomial-time algorithm for computing one contrastive explanation. These novel algorithms build on explanation graphs (XpG's). XpG's denote a graph representation that enables both theoretically and practically efficient computation of explanations for decision graphs. Furthermore, the paper proposes a practically efficient solution for the enumeration of explanations, and studies the complexity of deciding whether a given feature is included in some explanation. For the concrete case of decision trees, the paper shows that the set of all contrastive explanations can be enumerated in polynomial time. Finally, the experimental results validate the practical applicability of the algorithms proposed in the paper on a wide range of publicly available benchmarks.
- Europe (0.28)
- North America > United States (0.28)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Diagnosis (0.75)
- Information Technology > Artificial Intelligence > Machine Learning > Decision Tree Learning (0.75)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.46)
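The polynomial-time computation of one PI-explanation for a decision tree, claimed in the abstract above, can be illustrated with a simple deletion-based sketch (an illustrative variant, not necessarily the paper's XpG-based algorithm): start from the instance's full assignment, and try dropping each feature in turn; a feature may be dropped if every leaf still consistent with the reduced assignment predicts the same class. Each check is linear in the tree size, giving a polynomial bound overall. The tuple encoding of tree nodes below is a hypothetical representation chosen for brevity.

```python
# Hedged sketch: deletion-based PI-explanation for a decision tree.
# Nodes are illustrative tuples: ('leaf', cls) or
# ('node', feature, threshold, left, right), taking left when
# x[feature] <= threshold.

def classes_reachable(node, assignment):
    """Classes of all leaves consistent with a partial assignment."""
    if node[0] == 'leaf':
        return {node[1]}
    _, f, t, left, right = node
    if f in assignment:  # feature fixed: follow one branch
        return classes_reachable(left if assignment[f] <= t else right,
                                 assignment)
    # feature free: both branches remain possible
    return (classes_reachable(left, assignment) |
            classes_reachable(right, assignment))

def pi_explanation_dt(tree, x):
    """Subset-minimal set of features sufficient for the tree's
    prediction on instance x."""
    assignment = dict(enumerate(x))
    target = classes_reachable(tree, assignment).pop()
    for f in list(assignment):
        trial = {k: v for k, v in assignment.items() if k != f}
        if classes_reachable(tree, trial) == {target}:
            assignment = trial  # feature f is not needed
    return sorted(assignment)
```

For example, on a tree that predicts 'A' whenever feature 0 or feature 1 is at most 0.5, the instance [0, 0] needs only one of the two features fixed, and the sketch returns a singleton explanation.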