Reviews: New Liftable Classes for First-Order Probabilistic Inference

Neural Information Processing Systems

This paper makes a concrete contribution to lifted probabilistic inference, showing that the domain recursion rule can be used to solve certain interesting problems that are intractable for state-of-the-art lifted inference software. The insights here seem likely to be incorporated into upcoming versions of those packages. However, some of the statements in the paper are not sufficiently precise. The description of the domain recursion rule itself (p. 5, top) is much less precise than the definition in the 2011 paper that introduced it. It's not clear what the preconditions are for applying the rule or exactly how it transforms the theory. Also, the description mentions caching (line 175), but it would be helpful to explain how the inference algorithm ends up making multiple calls to the cache.


Understanding Interventional TreeSHAP : How and Why it Works

Laberge, Gabriel, Pequignot, Yann

arXiv.org Artificial Intelligence

Shapley values are ubiquitous in interpretable Machine Learning due to their strong theoretical background and efficient implementation in the SHAP library. Computing these values previously incurred a cost exponential in the number of input features of an opaque model. Now, with efficient implementations such as Interventional TreeSHAP, this exponential burden is alleviated, assuming one is explaining ensembles of decision trees. Although Interventional TreeSHAP has risen in popularity, it still lacks a formal proof of how and why it works. We provide such a proof with the aim not only of increasing the transparency of the algorithm but also of encouraging further development of these ideas. Notably, our proof for Interventional TreeSHAP is easily adapted to Shapley-Taylor indices and one-hot-encoded features.
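The exponential cost the abstract refers to can be made concrete with a brute-force interventional Shapley computation: every coalition of features is enumerated, with features in the coalition taking their values from the explained instance and the rest from a background sample. The toy model and data below are purely illustrative (not from the paper); they show the O(2^n) baseline that TreeSHAP avoids.

```python
from itertools import combinations
from math import factorial

# Hypothetical toy "tree" model of three features (illustrative only):
# feature 2 is never used, so its Shapley value should be zero.
def model(x):
    return (10.0 if x[0] > 0 else 0.0) + (5.0 if x[1] > 0 else 0.0)

def interventional_shapley(model, x, background, n_features):
    """Brute-force interventional Shapley values.

    For a coalition S, features in S take their values from x and the
    rest from the background sample; the loop over all subsets costs
    O(2^n) model evaluations -- the burden TreeSHAP alleviates.
    """
    def value(S):
        z = [x[i] if i in S else background[i] for i in range(n_features)]
        return model(z)

    phi = [0.0] * n_features
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley kernel weight |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n_features - k - 1) / factorial(n_features)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

x = [1.0, 1.0, 1.0]       # instance to explain
bg = [-1.0, -1.0, -1.0]   # single background (reference) sample
print(interventional_shapley(model, x, bg, 3))  # -> [10.0, 5.0, 0.0]
```

By the efficiency axiom, the values sum to model(x) minus model(bg), which is a convenient sanity check for any Shapley implementation.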


MECHANIZED REASONING

AI Classics

We will define the notions of abstract theorem-proving graph, abstract theorem-proving problem g, and search strategy E for g. These concepts generalize the usual tree (or graph) searching problem and subsume the Hart, Nilsson and Raphael (1968) and Pohl (1969) theories of heuristic search. In particular, the admissibility and optimality theorems of Hart, Nilsson and Raphael generalize to the classes 0 and 0" of diagonal search strategies for abstract theorem-proving problems. In addition, the subclass au of 0 is shown to be optimal for 2. Implementation of diagonal search is treated in some detail for theorem-proving by resolution rules (Robinson 1965).

SEARCH STRATEGIES, COMPLETENESS AND EFFICIENCY

Completeness and efficiency of proof procedures can be studied only in the context of search strategies. A system T of inference rules and axioms can be complete or incomplete for a given class of intended interpretations. Similarly, a search strategy E for T may or may not be complete for ...
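The Hart, Nilsson and Raphael (1968) theory referenced above is the classical A* framework: with an admissible heuristic (one that never overestimates the remaining cost), the admissibility theorem guarantees the search returns a minimum-cost solution. A minimal sketch, on a small hypothetical graph chosen for illustration:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search (Hart, Nilsson & Raphael 1968).

    graph: node -> list of (successor, edge cost); h: admissible
    heuristic estimates. Frontier entries are ordered by f = g + h.
    """
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, cost in graph[node]:
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h[nxt], ng, nxt, path + [nxt]))
    return None

# Illustrative graph: the direct route A -> C -> D costs 5, while the
# optimal route A -> B -> C -> D costs 4.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)],
         "C": [("D", 1)], "D": []}
h = {"A": 3, "B": 2, "C": 1, "D": 0}   # admissible: h never exceeds true cost
print(a_star(graph, h, "A", "D"))      # -> (4, ['A', 'B', 'C', 'D'])
```

The abstract's contribution is to generalize exactly this admissibility/optimality result from graph search to abstract theorem-proving problems.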


Non-Negative Matrix Factorization, Convexity and Isometry

Vasiloglou, Nikolaos, Gray, Alexander G., Anderson, David V.

arXiv.org Artificial Intelligence

In this paper we explore avenues for improving the reliability of dimensionality reduction methods such as Non-Negative Matrix Factorization (NMF) as interpretive exploratory data analysis tools. We first explore the difficulties of the optimization problem underlying NMF, showing for the first time that non-trivial NMF solutions always exist and that the optimization problem is actually convex, by using the theory of Completely Positive Factorization. We subsequently explore four novel approaches to finding globally-optimal NMF solutions using various ideas from convex optimization. We then develop a new method, isometric NMF (isoNMF), which preserves non-negativity while also providing an isometric embedding, simultaneously achieving two properties which are helpful for interpretation. Though it results in a more difficult optimization problem, we show experimentally that the resulting method is scalable and even achieves more compact spectra than standard NMF.
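For context on the baseline the paper improves upon, standard NMF factors a non-negative matrix X into non-negative factors W and H minimizing the reconstruction error; the Lee-Seung multiplicative updates below are a common (locally convergent) solver for it. This is a sketch of plain NMF only, not the paper's isoNMF, which additionally enforces an isometric embedding; the data is random and illustrative.

```python
import numpy as np

def nmf(X, r, iters=1000, eps=1e-9, seed=0):
    """Plain NMF via Lee-Seung multiplicative updates.

    Minimizes ||X - W H||_F with W >= 0, H >= 0; updates keep the
    factors non-negative by construction. eps guards against division
    by zero.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Synthetic exactly-rank-4 non-negative matrix, for illustration.
rng = np.random.default_rng(1)
X = rng.random((20, 4)) @ rng.random((4, 15))
W, H = nmf(X, 4)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.4f}")
```

Multiplicative updates only reach a local optimum of the non-convex formulation, which is precisely the reliability issue motivating the paper's convex reformulation via Completely Positive Factorization.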