pi-explanation
Explaining Naive Bayes and Other Linear Classifiers with Polynomial Time and Delay
Recent work proposed the computation of so-called PI-explanations of Naive Bayes Classifiers (NBCs). PI-explanations are subset-minimal sets of feature-value pairs that are sufficient for the prediction, and have so far been computed with state-of-the-art exact algorithms that are worst-case exponential in time and space. In contrast, we show that one PI-explanation for an NBC can be computed in log-linear time, and that the same result also applies to the more general class of linear classifiers. Furthermore, we show that PI-explanations can be enumerated with polynomial delay. Experimental results demonstrate the performance gains of the new algorithms when compared with earlier work, and also investigate ways to measure the quality of heuristic explanations.
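The log-linear computation of a single PI-explanation can be sketched as follows. This is a minimal illustration under our own assumptions, not the paper's exact pseudocode; the function name and parameters are hypothetical. For a linear classifier deciding `w.x + b > 0` on a positively classified instance, each feature's "gap" measures how much fixing it to its instance value raises the worst-case score over all assignments to the free features; sorting the gaps dominates the running time, giving O(n log n).

```python
def one_pi_explanation(w, b, x, domains):
    """Sketch: compute one PI-explanation for a linear classifier
    w.x + b > 0, assuming the instance x is classified positively.

    `domains[i]` lists the possible values of feature i. Fixing
    feature i to x[i] raises the worst-case score by its 'gap';
    greedily fixing the largest gaps until the worst-case score is
    positive yields a (cardinality-minimal) PI-explanation."""
    n = len(w)
    # Worst-case score when every feature is left free.
    worst = b + sum(min(w[i] * v for v in domains[i]) for i in range(n))
    # Gap of feature i: improvement from fixing it to its instance value.
    gaps = sorted(
        ((w[i] * x[i] - min(w[i] * v for v in domains[i]), i) for i in range(n)),
        reverse=True,
    )
    expl = []
    for gap, i in gaps:
        if worst > 0:      # prediction already guaranteed for all free features
            break
        worst += gap       # fix feature i to x[i]
        expl.append(i)
    return sorted(expl)
```

On a toy classifier with w = (2, -1, 3), b = -1 over Boolean features and the instance (1, 0, 1), fixing only the third feature already forces a positive prediction, so the sketch returns the singleton explanation {2}.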
A Supplementary Material
Thus, Proposition 1 yields a smallest PI-explanation.

Proposition 2. PI-explanations of an XLC can be enumerated with log-linear delay.

This means that all PI-explanations will be found. Thus, all leaves correspond to PI-explanations.

Figure 5: Percentage of important "hits" of explanations produced by Anchor and SHAP.
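The enumeration result can be sketched along the same lines; this is our own illustration under stated assumptions, not the paper's algorithm, and the names are hypothetical. With features sorted by decreasing gap, a PI-explanation corresponds to a minimal set of gaps whose sum strictly exceeds the threshold (the amount by which the all-features-free worst-case score falls short of zero). A DFS that branches on including or excluding each feature, and prunes any branch whose remaining gaps cannot reach the threshold, has the property that every leaf is a minimal solution, which bounds the delay between consecutive outputs.

```python
def enumerate_pi_explanations(gaps, threshold):
    """Sketch: yield all minimal index sets S with
    sum(gaps[i] for i in S) > threshold.

    Features are processed in decreasing-gap order, so the threshold
    is crossed exactly when the smallest chosen gap is added, which
    makes every emitted set minimal. Branches that cannot reach the
    threshold are pruned, so every leaf of the search tree is a
    solution (polynomial delay)."""
    order = sorted(range(len(gaps)), key=lambda i: -gaps[i])
    # suffix[k] = total gap still available from position k onward.
    suffix = [0.0] * (len(order) + 1)
    for k in range(len(order) - 1, -1, -1):
        suffix[k] = suffix[k + 1] + gaps[order[k]]

    def dfs(k, total, chosen):
        if total > threshold:          # crossed on the last (smallest) gap: minimal
            yield sorted(chosen)
            return
        if k == len(order) or total + suffix[k] <= threshold:
            return                     # dead branch: prune
        i = order[k]
        yield from dfs(k + 1, total + gaps[i], chosen + [i])   # include feature i
        yield from dfs(k + 1, total, chosen)                   # exclude feature i

    yield from dfs(0, 0.0, [])
```

For gaps (3, 2, 2) and threshold 3, the search tree has exactly three leaves, one per minimal explanation: {0, 1}, {0, 2} and {1, 2}.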
Common comments
Some reviewers raised the issue of the paper's "narrow interest" given its "focused contribution". We cannot agree with the reviewers. Our work can be used jointly with these heuristic explainers for computing (local) PI-explanations. Moreover, interpretability depends on the number of features one needs to account for. We were very happy to read the reviewer's comment: "this will really shift the way I think about this [...]". We believe our results will impact a broader community if the paper is accepted. It should be underscored that linear models are also used for explaining complex ML models.
Towards a Shapley Value Graph Framework for Medical peer-influence
Duell, Jamie, Seisenberger, Monika, Aarts, Gert, Zhou, Shangming, Fan, Xiuyi
Explainable Artificial Intelligence (XAI) is at the forefront of Artificial Intelligence (AI) research, with a variety of techniques and libraries coming to fruition in recent years, e.g., model-agnostic explanations [1, 2], counterfactual explanations [3, 4], contrastive explanations [5] and argumentation-based explanations [6, 7]. XAI methods are ubiquitous across fields of Machine Learning (ML), where the trust placed in applied ML is undermined by the black-box nature of many methods. Generally speaking, an ML model takes a set of inputs (features) and predicts some output, and existing work on XAI predominantly focuses on understanding the relations between features and outputs. These approaches are successful in many areas, as they suggest how the output of a model might change should we change its inputs. Thus, interventions - manipulating inputs in specific ways in the hope of reaching some desired outcome - can be guided by existing XAI methods when these provide relatively accurate explanations [8, 9]. However, since existing XAI methods hold little knowledge of the consequences of interventions [10], such interventions can be susceptible to error. From both a business and an ethical standpoint, we must reach beyond understanding the relations between features and outputs; we also need to understand the influence that features have on one another. We believe such knowledge holds the key to a deeper understanding of model behaviour and to the identification of suitable interventions.
Efficient Explanations for Knowledge Compilation Languages
Huang, Xuanxiang, Izza, Yacine, Ignatiev, Alexey, Cooper, Martin C., Asher, Nicholas, Marques-Silva, Joao
Knowledge compilation (KC) languages find a growing number of practical uses, including in Constraint Programming (CP) and in Machine Learning (ML). In most applications, one natural question is how to explain the decisions made by models represented in a KC language. This paper shows that, for many of the best-known KC languages, well-known classes of explanations can be computed in polynomial time. These languages include deterministic decomposable negation normal form (d-DNNF), and hence any KC language that is strictly less succinct than d-DNNF. Furthermore, the paper also investigates the conditions under which polynomial-time computation of explanations can be extended to KC languages more succinct than d-DNNF.