Probabilistic Logic Programming


Semirings for Probabilistic and Neuro-Symbolic Logic Programming

Derkinderen, Vincent, Manhaeve, Robin, Martires, Pedro Zuidberg Dos, De Raedt, Luc

arXiv.org Artificial Intelligence

The original framework of Poole and Sato extended the logic programming language Prolog (Flach, 1994) with probabilistic facts. These are facts annotated with the probability that they are true; they play a role similar to the parentless nodes in Bayesian networks in that they are marginally independent of one another, with the probabilistic dependencies induced by the rules of the logic program. This resulted in the celebrated distribution semantics (Sato, 1995) that is the basis of probabilistic logic programming, and the corresponding learning algorithm in the PRISM language (Sato, 1995) constitutes, to the best of the authors' knowledge, the very first probabilistic programming language with built-in support for machine learning. The work of Sato and Poole has inspired many follow-up works on inference and learning, and has also introduced many variations and extensions of probabilistic logic programming and its celebrated distribution semantics.
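
The distribution semantics can be sketched by brute-force enumeration: each truth-value choice over the independent probabilistic facts defines a possible world, the deterministic rules extend that world with derived atoms, and the probability of a query is the total probability of the worlds entailing it. The following is a minimal illustrative sketch (the facts and the `alarm` rule are a hypothetical toy program, not from any of the papers above):

```python
from itertools import product

# Hypothetical toy program under the distribution semantics:
#   0.6::burglary.  0.2::earthquake.
#   alarm :- burglary.  alarm :- earthquake.
facts = {"burglary": 0.6, "earthquake": 0.2}

def derived(world):
    """Deterministic rules: alarm holds if either probabilistic fact does."""
    return {"alarm"} if (world & {"burglary", "earthquake"}) else set()

def query_probability(query):
    """P(query) = sum of the probabilities of the worlds entailing it."""
    total = 0.0
    for choices in product([True, False], repeat=len(facts)):
        world = {f for f, c in zip(facts, choices) if c}
        p = 1.0  # facts are marginally independent, so probabilities multiply
        for (f, pf), c in zip(facts.items(), choices):
            p *= pf if c else 1.0 - pf
        if query in world | derived(world):
            total += p
    return total

print(round(query_probability("alarm"), 3))  # 1 - 0.4*0.8 = 0.68
```

Enumeration is exponential in the number of probabilistic facts; practical systems such as PRISM and ProbLog instead rely on knowledge compilation or tabling, but the semantics computed is the same.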


Explanations as Programs in Probabilistic Logic Programming

Vidal, Germán

arXiv.org Artificial Intelligence

The generation of comprehensible explanations is an essential feature of modern artificial intelligence systems. In this work, we consider probabilistic logic programming, an extension of logic programming which can be useful to model domains with relational structure and uncertainty. Essentially, a program specifies a probability distribution over possible worlds (i.e., sets of facts). The notion of explanation is typically associated with that of a world, so that one often looks for the most probable world as well as for the worlds where the query is true. Unfortunately, such explanations exhibit no causal structure. In particular, the chain of inferences required for a specific prediction (represented by a query) is not shown. In this paper, we propose a novel approach where explanations are represented as programs that are generated from a given query by a number of unfolding-like transformations. Here, the chain of inferences that proves a given query is made explicit. Furthermore, the generated explanations are minimal (i.e., contain no irrelevant information) and can be parameterized w.r.t. a specification of visible predicates, so that the user may hide uninteresting details from explanations.


The generalised distribution semantics and projective families of distributions

Weitkämper, Felix

arXiv.org Artificial Intelligence

This abstracts the core ideas beyond logic programming as such to encompass frameworks from probabilistic databases, probabilistic finite model theory and discrete lifted Bayesian networks. To demonstrate the usefulness of such a general approach, we completely characterise the projective families of distributions representable in the generalised distribution semantics, and we demonstrate both that large classes of interesting projective families cannot be represented in a generalised distribution semantics and that already a very limited fragment of logic programming (acyclic determinate logic programs) in the deterministic part suffices to represent all those projective families that are representable in the generalised distribution semantics at all.


Syntactic Requirements for Well-defined Hybrid Probabilistic Logic Programs

Azzolini, Damiano, Riguzzi, Fabrizio

arXiv.org Artificial Intelligence

The power and expressivity of Probabilistic Logic Programming (PLP) [8, 18] have been utilized to represent many real-world situations [2, 9, 14]. Usually, probabilistic logic programs involve only discrete random variables with Bernoulli or Categorical distributions. Numerous solutions emerged to also handle continuous distributions [10, 12, 25], increasing the expressiveness of PLP and giving birth to hybrid probabilistic logic programs, that is, programs that include both discrete and continuous random variables. Inference in this type of program is hard since it combines the complexity of the grounding computation with the intractability of a distribution defined by a mixture of random variables. Usually, inference in general hybrid probabilistic logic programs (i.e., without imposing restrictions on the type of distributions allowed) is done by leveraging knowledge compilation and using external solvers [25] or by sampling [4, 16].


An asymptotic analysis of probabilistic logic programming with implications for expressing projective families of distributions

Weitkämper, Felix

arXiv.org Artificial Intelligence

Over the last years, there has been increasing research on the scaling behaviour of statistical relational representations with the size of the domain, and on the connections between domain size dependence and lifted inference. In particular, the asymptotic behaviour of statistical relational representations has come under scrutiny, and projectivity was isolated as the strongest form of domain size independence. In this contribution we show that every probabilistic logic program under the distribution semantics is asymptotically equivalent to a probabilistic logic program consisting only of range-restricted clauses over probabilistic facts. To facilitate the application of classical results from finite model theory, we introduce the abstract distribution semantics, defined as an arbitrary logical theory over probabilistic facts to bridge the gap to the distribution semantics underlying probabilistic logic programming. In this representation, range-restricted logic programs correspond to quantifier-free theories, making asymptotic quantifier results avilable for use. We can conclude that every probabilistic logic program inducing a projective family of distributions is in fact captured by this class, and we can infer interesting consequences for the expressivity of probabilistic logic programs as well as for the asymptotic behaviour of probabilistic rules.


MAP Inference for Probabilistic Logic Programming

Bellodi, Elena, Alberti, Marco, Riguzzi, Fabrizio, Zese, Riccardo

arXiv.org Artificial Intelligence

In Probabilistic Logic Programming (PLP) the most commonly studied inference task is to compute the marginal probability of a query given a program. In this paper, we consider two other important tasks in the PLP setting: the Maximum-A-Posteriori (MAP) inference task, which determines the most likely values for a subset of the random variables given evidence on other variables, and the Most Probable Explanation (MPE) task, the instance of MAP where the query variables are the complement of the evidence variables. We present a novel algorithm, included in the PITA reasoner, which tackles these tasks by representing each problem as a Binary Decision Diagram and applying a dynamic programming procedure on it. We compare our algorithm with the version of ProbLog that admits annotated disjunctions and can perform MAP and MPE inference. Experiments on several synthetic datasets show that PITA outperforms ProbLog in many cases. This paper is under consideration for acceptance in Theory and Practice of Logic Programming.
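
The MPE task described above can be made concrete with a brute-force sketch: among all assignments to the probabilistic facts consistent with the evidence, return the one of maximum probability. The tiny program below is a hypothetical example (not PITA's BDD-based algorithm, which achieves the same result without enumeration):

```python
from itertools import product

# Toy MPE by exhaustive enumeration over a hypothetical program:
#   0.1::burglary.  0.2::earthquake.
#   alarm :- burglary.  alarm :- earthquake.
# Evidence: alarm is true. Find the most probable complete assignment.
facts = {"burglary": 0.1, "earthquake": 0.2}

def mpe():
    best, best_p = None, -1.0
    for choices in product([True, False], repeat=len(facts)):
        world = dict(zip(facts, choices))
        p = 1.0
        for f, c in world.items():
            p *= facts[f] if c else 1.0 - facts[f]
        if not (world["burglary"] or world["earthquake"]):
            continue  # world violates the evidence alarm = true
        if p > best_p:
            best, best_p = world, p
    return best, best_p

world, p = mpe()  # earthquake alone: 0.9 * 0.2 = 0.18 beats the alternatives
```

Note that the MPE world here (earthquake only) differs from the marginally most likely cause one might guess; dynamic programming over a Binary Decision Diagram, as in the paper, finds the same maximum by propagating max-product values instead of sums.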


Foundations of Probabilistic Logic Programming

#artificialintelligence

Probabilistic Logic Programming extends Logic Programming by enabling the representation of uncertain information. Probabilistic Logic Programming is at the intersection of two wider research fields: the integration of logic and probability, and Probabilistic Programming. Logic enables the representation of complex relations among entities, while probability theory is useful for modeling uncertainty over attributes and relations. Combining the two is a very active field of study. Probabilistic Programming extends programming languages with probabilistic primitives that can be used to write complex probabilistic models.


Negation Without Negation in Probabilistic Logic Programming

Buchman, David (University of British Columbia) | Poole, David (University of British Columbia)

AAAI Conferences

Probabilistic logic programs without negation can have cycles (with a preference for false), but cannot represent all conditional distributions. Probabilistic logic programs with negation can represent arbitrary conditional probabilities, but with cycles they create logical inconsistencies. We show how allowing negative noise probabilities allows us to represent arbitrary conditional probabilities without negations. Noise probabilities for non-exclusive rules are difficult to interpret and unintuitive to manipulate; to alleviate this we define "probability-strengths", which provide an intuitive additive algebra for combining rules. For acyclic programs we prove what constraints on the strengths allow for proper distributions on the non-noise variables and allow for all non-extreme distributions to be represented. We show how arbitrary CPDs can be converted into this form in a canonical way. Furthermore, if a joint distribution can be compactly represented by a cyclic program with negations, we show how it can also be compactly represented with negative noise probabilities and no negations. This allows algorithms for exact inference that do not support negations to be applicable to probabilistic logic programs with negations.


Lifted Variable Elimination for Probabilistic Logic Programming

Bellodi, Elena, Lamma, Evelina, Riguzzi, Fabrizio, Costa, Vitor Santos, Zese, Riccardo

arXiv.org Artificial Intelligence

Lifted inference has been proposed for various probabilistic logical frameworks in order to compute the probability of queries in a time that depends on the size of the domains of the random variables rather than the number of instances. Even though various authors have underlined its importance for probabilistic logic programming (PLP), lifted inference has been applied up to now only to relational languages outside of logic programming. In this paper we adapt Generalized Counting First Order Variable Elimination (GC-FOVE) to the problem of computing the probability of queries to probabilistic logic programs under the distribution semantics. In particular, we extend the Prolog Factor Language (PFL) to include two new types of factors that are needed for representing ProbLog programs. These factors take into account the existing causal independence relationships among random variables and are managed by the extension to variable elimination proposed by Zhang and Poole for dealing with convergent variables and heterogeneous factors. Two new operators are added to GC-FOVE for treating heterogeneous factors. The resulting algorithm, called LP² for Lifted Probabilistic Logic Programming, has been implemented by modifying the PFL implementation of GC-FOVE and tested on three benchmarks for lifted inference. A comparison with PITA and ProbLog2 shows the potential of the approach.


Probabilistic Logic Programming under Inheritance with Overriding

Lukasiewicz, Thomas

arXiv.org Artificial Intelligence

We present probabilistic logic programming under inheritance with overriding. This approach is based on new notions of entailment for reasoning with conditional constraints, which are obtained from the classical notion of logical entailment by adding the principle of inheritance with overriding. This is done by using recent approaches to probabilistic default reasoning with conditional constraints. We analyze the semantic properties of the new entailment relations. We also present algorithms for probabilistic logic programming under inheritance with overriding, and program transformations for increased efficiency.