Lifted Inference Seen from the Other Side: The Tractable Features

Neural Information Processing Systems

Lifted inference algorithms for representations that combine first-order logic and probabilistic graphical models have been the focus of much recent research. All lifted algorithms developed to date are based on the same underlying idea: take a standard probabilistic inference algorithm (e.g., variable elimination, belief propagation, etc.) and improve its efficiency by exploiting repeated structure in the first-order model. In this paper, we approach the problem from the other side: we use techniques from logic for probabilistic inference. In particular, we define a set of rules that look only at the logical representation to identify models for which exact efficient inference is possible. We show that our rules yield several new tractable classes that cannot be solved efficiently by any of the existing techniques.
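As a hedged illustration of the lifting idea this line of work builds on (not the paper's rule set; the model and all function names below are hypothetical), the Python sketch below shows the core trick: when a model is fully symmetric in its ground atoms, inference can sum over counts of interchangeable atoms in O(n) instead of enumerating all 2^n truth assignments.

```python
from math import comb, exp

# Illustrative only: a fully exchangeable model over n Boolean ground atoms,
# where each true atom contributes weight exp(w). Ground inference enumerates
# 2^n states; lifted inference sums over the n+1 count values instead.

def partition_function_ground(n, w):
    # O(2^n): enumerate every truth assignment explicitly.
    total = 0.0
    for state in range(2 ** n):
        k = bin(state).count("1")  # number of true atoms in this state
        total += exp(w * k)
    return total

def partition_function_lifted(n, w):
    # O(n): all states with the same count k are interchangeable, and
    # there are C(n, k) of them, so sum over counts rather than states.
    return sum(comb(n, k) * exp(w * k) for k in range(n + 1))

assert abs(partition_function_ground(12, 0.5)
           - partition_function_lifted(12, 0.5)) < 1e-6
```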


Inference for a New Probabilistic Constraint Logic

AAAI Conferences

Probabilistic logics combine the expressive power of logic with the ability to reason under uncertainty. Several probabilistic logic languages have been proposed in the past, each with its own features. In this paper, we propose a new probabilistic constraint logic programming language, which combines constraint logic programming with probabilistic reasoning. The language supports modeling of discrete as well as continuous probability distributions by expressing constraints on random variables. We introduce the declarative semantics of this language, present an exact inference algorithm that derives bounds on the joint probability distributions consistent with the specified constraints, and give experimental results. The results are encouraging, indicating that inference in our language is feasible for solving challenging problems.
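The paper's language and algorithm are not reproduced here, but a minimal sketch of what "bounds on joint distributions consistent with constraints" can mean: if only the marginals P(A) and P(B) are constrained, the classical Fréchet inequalities bound the probability of the conjunction. The function name is hypothetical.

```python
def frechet_bounds(p_a, p_b):
    """Tightest bounds on P(A and B) consistent with the marginals alone."""
    lower = max(0.0, p_a + p_b - 1.0)  # overlap forced when p_a + p_b > 1
    upper = min(p_a, p_b)              # a conjunction cannot exceed either marginal
    return lower, upper

# With P(A) = 0.7 and P(B) = 0.6, every consistent joint distribution
# satisfies 0.3 <= P(A and B) <= 0.6.
print(frechet_bounds(0.7, 0.6))  # (0.3, 0.6)
```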


A Comparison of the Commonsense and Fixed Point Theories of Nonmonotonicity

AAAI Conferences

The mathematical fixed point theories of nonmonotonic reasoning are examined and compared to a commonsense theory of nonmonotonic reasoning which models our intuitive ability to reason about defaults. It is shown that all of the known problems of the fixed point theories are solved by the commonsense theory. The concepts of this commonsense theory do not involve mathematical fixed points, but instead are explicitly defined in a monotonic modal quantificational logic which captures the modal notion of logical truth. A number of recent papers [McDermott & Doyle; McDermott; Moore; Reiter] have attempted to formalize the commonsense notion of something being possible with respect to what is assumed; all of these have been based on the mathematical theory of fixed points.
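As a hedged, toy illustration of the fixed-point style of theory being compared (Reiter-style normal defaults over propositional literals, with no full deductive closure; all names are hypothetical, and this is not Brown's commonsense theory):

```python
from itertools import combinations

# A default (p : c / c) fires when its prerequisite p is in the candidate
# set E and the negation of c is not in E. E is an extension iff applying
# all firing defaults to the facts reproduces exactly E (a fixed point).

def neg(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def consequences(facts, defaults, e):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for pre, con in defaults:
            if pre in derived and neg(con) not in e and con not in derived:
                derived.add(con)
                changed = True
    return derived

def extensions(facts, defaults):
    literals = sorted(set(facts)
                      | {con for _, con in defaults}
                      | {neg(con) for _, con in defaults})
    found = []
    for r in range(len(literals) + 1):
        for candidate in combinations(literals, r):
            e = set(candidate) | set(facts)
            if consequences(facts, defaults, e) == e and e not in found:
                found.append(e)
    return found

# "Birds normally fly": the default concludes flies.
print(extensions({"bird"}, [("bird", "flies")]))          # [{'bird', 'flies'}]
# Adding the fact -flies blocks the default: nonmonotonicity.
print(extensions({"bird", "-flies"}, [("bird", "flies")]))  # [{'bird', '-flies'}]
```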


The Divide-and-Conquer Subgoal-Ordering Algorithm for Speeding up Logic Inference

Journal of Artificial Intelligence Research

It is common to view programs as a combination of logic and control: the logic part defines what the program must do; the control part, how to do it. The Logic Programming paradigm was developed with the intention of separating the logic from the control. Recently, extensive research has been conducted on the automatic generation of control for logic programs, but only a few of these works have considered generating control to improve the efficiency of logic programs. In this paper we present a novel algorithm for automatically finding lowest-cost subgoal orderings. The algorithm uses a divide-and-conquer strategy: the given set of subgoals is partitioned into smaller sets based on the co-occurrence of free variables; the subsets are ordered recursively and then merged, yielding a provably optimal order. We experimentally demonstrate the utility of the algorithm in several domains and discuss how it can cooperate with other existing methods.
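A minimal sketch of the divide step only, assuming subgoals are given with their free variables; the paper's cost model, recursive ordering, and provably optimal merge are not reproduced, and all names are hypothetical.

```python
from collections import defaultdict

# Subgoals are nodes; two subgoals are connected when they share a free
# variable. Connected components can then be ordered independently, which
# is what makes the divide-and-conquer partitioning sound.

def components(subgoals):
    """subgoals: dict mapping subgoal name -> set of free variable names."""
    var_to_goals = defaultdict(set)
    for goal, vars_ in subgoals.items():
        for v in vars_:
            var_to_goals[v].add(goal)

    seen, parts = set(), []
    for start in subgoals:
        if start in seen:
            continue
        stack, part = [start], set()
        while stack:
            goal = stack.pop()
            if goal in part:
                continue
            part.add(goal)
            for v in subgoals[goal]:
                stack.extend(var_to_goals[v] - part)
        seen |= part
        parts.append(part)
    return parts

# p(X,Y) and q(Y) share Y; r(Z) is independent of both, so it forms
# its own component (element order within each set may vary).
print(components({"p(X,Y)": {"X", "Y"}, "q(Y)": {"Y"}, "r(Z)": {"Z"}}))
# -> [{'p(X,Y)', 'q(Y)'}, {'r(Z)'}]
```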


Inference in Probabilistic Logic Programs using Weighted CNF's

arXiv.org Artificial Intelligence

Probabilistic logic programs are logic programs in which some of the facts are annotated with probabilities. Several classical probabilistic inference tasks (such as MAP and computing marginals) have not yet received much attention for this formalism. The contribution of this paper is that we develop efficient inference algorithms for these tasks. Our approach is based on converting the probabilistic logic program, together with the query and evidence, to a weighted CNF formula. This allows us to reduce the inference tasks to well-studied problems such as weighted model counting, which we solve with state-of-the-art methods. We consider multiple methods both for the conversion of the programs and for inference on the weighted CNF. The resulting approach is evaluated experimentally and shown to improve upon the state of the art in probabilistic logic programming.
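As a hedged sketch of the target task rather than the paper's pipeline, the brute-force weighted model counter below shows how a query probability reduces to a ratio of two weighted model counts; the toy program and all names are hypothetical, and real WMC solvers use knowledge compilation or component caching rather than this exponential enumeration.

```python
from itertools import product

# Brute-force weighted model counting over a CNF with literal weights.
# Clauses are lists of signed variable names; weights map each variable
# to (weight if true, weight if false).

def wmc(variables, clauses, weights, evidence=()):
    total = 0.0
    for bits in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, bits))
        if any(model[v] != val for v, val in evidence):
            continue  # model contradicts the evidence
        if all(any(model[lit.lstrip("-")] != lit.startswith("-")
                   for lit in clause) for clause in clauses):
            w = 1.0
            for v, val in model.items():
                w *= weights[v][0] if val else weights[v][1]
            total += w
    return total

# Toy program: 0.3::burglary.  alarm :- burglary.
# Its completion makes alarm equivalent to burglary, encoded as two clauses.
variables = ["burglary", "alarm"]
clauses = [["-burglary", "alarm"], ["-alarm", "burglary"]]
weights = {"burglary": (0.3, 0.7), "alarm": (1.0, 1.0)}

z = wmc(variables, clauses, weights)
p_alarm = wmc(variables, clauses, weights, evidence=[("alarm", True)]) / z
print(p_alarm)  # 0.3
```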