Collaborating Authors

Gogate, Vibhav


The Inclusion-Exclusion Rule and its Application to the Junction Tree Algorithm

AAAI Conferences

In this paper, we consider the inclusion-exclusion rule, a known yet seldom-used rule of probabilistic inference. Unlike the widely used sum rule, which requires easy access to all joint probability values, the inclusion-exclusion rule requires easy access to several marginal probability values. We therefore develop a new representation of the joint distribution that is amenable to the inclusion-exclusion rule. We compare the relative strengths and weaknesses of the inclusion-exclusion rule with the sum rule and develop a hybrid rule called the inclusion-exclusion-sum (IES) rule, which combines their power. We apply the IES rule to junction trees, treating the latter as a target for knowledge compilation, and show that in many cases it greatly reduces the time required to answer queries. Our experiments demonstrate the power of our approach: at query time, on several networks, our new scheme was an order of magnitude faster than the junction tree algorithm.
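
As a concrete contrast with the sum rule, the two-event form of the rule in question is worth stating (this is textbook inclusion-exclusion, not the paper's new joint-distribution representation): a joint probability over negated events is recovered from marginals alone.

    % Inclusion-exclusion: the joint of two negated events from marginals.
    P(\neg A, \neg B) = 1 - P(A) - P(B) + P(A, B)
    % General form over n events (the S = \emptyset term contributes 1):
    P\Bigl(\bigcap_{i=1}^{n} \neg A_i\Bigr)
      = \sum_{S \subseteq \{1,\dots,n\}} (-1)^{|S|} \, P\Bigl(\bigcap_{i \in S} A_i\Bigr)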


GiSS: Combining Gibbs Sampling and SampleSearch for Inference in Mixed Probabilistic and Deterministic Graphical Models

AAAI Conferences

Mixed probabilistic and deterministic graphical models are ubiquitous in real-world applications. Unfortunately, Gibbs sampling, a popular MCMC technique, does not converge to the correct answers in the presence of determinism and therefore cannot be used for inference in such models. In this paper, we propose to remedy this problem by combining Gibbs sampling with SampleSearch, an advanced importance sampling technique that leverages complete SAT/CSP solvers to generate high-quality samples from hard deterministic spaces. We call the resulting algorithm GiSS. Unlike Gibbs sampling, which yields unweighted samples, GiSS yields weighted samples. Computing these weights exactly can be computationally expensive, and we therefore propose several approximations. We show that our approximate weighting schemes yield consistent estimates and demonstrate experimentally that GiSS is competitive in terms of accuracy with state-of-the-art algorithms such as SampleSearch, MC-SAT and belief propagation.
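
As background on how weighted samples become answers, here is a minimal self-normalized estimator sketch; the GiSS sampler and its weighting schemes are not reproduced, so `samples`, `weights`, and `f` are assumed inputs. Self-normalization is what allows weights known only up to a common constant to still yield consistent estimates.

    # Minimal sketch: self-normalized estimate of E[f(X)] from weighted
    # samples. The sampler producing (samples, weights) is assumed; any
    # GiSS-style weighting scheme would plug in here.
    def weighted_estimate(samples, weights, f):
        num = sum(w * f(x) for x, w in zip(samples, weights))
        den = sum(weights)
        return num / den

    # Example: estimate a marginal P(X0 = 1) with f = indicator function.
    # weighted_estimate(samples, weights, lambda x: float(x[0] == 1))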


Lifting WALKSAT-Based Local Search Algorithms for MAP Inference

AAAI Conferences

In this short position paper, we consider MaxWalkSAT, a local search algorithm for MAP inference in probabilistic graphical models, and lift it to the first-order level, yielding a powerful algorithm for MAP inference in Markov logic networks (MLNs). Lifted MaxWalkSAT is based on the observation that if the MLN is monadic, namely if each predicate is unary, then MaxWalkSAT is completely liftable, in the sense that no grounding is required at inference time. We propose to exploit this observation in a straightforward manner: convert the MLN to an equivalent monadic MLN by grounding a subset of its logical variables and then apply lifted MaxWalkSAT to it. It turns out, however, that finding the smallest subset of logical variables which, when grounded, will yield a monadic MLN is NP-hard in general, and we therefore propose an approximation algorithm for solving it.
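
For reference, a minimal ground (propositional) MaxWalkSAT sketch of the algorithm being lifted; the clause encoding and parameter defaults below are assumptions, and none of the lifting machinery is shown.

    import random

    # Minimal ground MaxWalkSAT sketch. Clauses are (weight, literals)
    # pairs, with literals as signed variable indices in 1..n_vars
    # (an assumed encoding).
    def maxwalksat(clauses, n_vars, max_flips=10000, p=0.5):
        assign = [False] + [random.random() < 0.5 for _ in range(n_vars)]

        def satisfied(lit):
            return assign[abs(lit)] == (lit > 0)

        def cost():  # total weight of unsatisfied clauses
            return sum(w for w, lits in clauses
                       if not any(satisfied(l) for l in lits))

        best, best_cost = assign[:], cost()
        for _ in range(max_flips):
            unsat = [lits for w, lits in clauses
                     if not any(satisfied(l) for l in lits)]
            if not unsat:
                break  # all clauses satisfied
            lits = random.choice(unsat)
            if random.random() < p:            # noisy move: random variable
                v = abs(random.choice(lits))
            else:                              # greedy move: cheapest flip
                def flip_cost(u):
                    assign[u] = not assign[u]
                    c = cost()
                    assign[u] = not assign[u]
                    return c
                v = min({abs(l) for l in lits}, key=flip_cost)
            assign[v] = not assign[v]
            if cost() < best_cost:
                best, best_cost = assign[:], cost()
        return best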


On Lifting the Gibbs Sampling Algorithm

Neural Information Processing Systems

Statistical relational learning models combine the power of first-order logic, the de facto tool for handling relational structure, with that of probabilistic graphical models, the de facto tool for handling uncertainty. Lifted probabilistic inference algorithms for these models have been the subject of much recent research. The main idea in these algorithms is to improve the speed, accuracy and scalability of existing graphical-model inference algorithms by exploiting symmetry in the first-order representation. In this paper, we consider blocked Gibbs sampling, an advanced variation of the classic Gibbs sampling algorithm, and lift it to the first-order level. We achieve this by partitioning the first-order atoms in the relational model into a set of disjoint clusters such that exact lifted inference is polynomial in each cluster given an assignment to all other atoms not in the cluster. We propose an approach for constructing such clusters and determining their complexity, and show how it can be used to trade off accuracy against computational complexity in a principled manner. Our experimental evaluation shows that lifted Gibbs sampling is superior to the propositional algorithm in terms of accuracy and convergence.
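
Schematically, blocked Gibbs alternates joint draws of whole clusters given the rest of the state. A skeleton follows, assuming a `sample_block(block, state)` routine that performs the per-cluster joint draw; in the lifted setting described above, that routine is where exact lifted inference would run.

    # Blocked Gibbs skeleton. `blocks` partitions the variables (or atoms),
    # `init_state` maps each variable to a value, and `sample_block` is an
    # assumed routine drawing a block jointly from its conditional.
    def blocked_gibbs(blocks, init_state, sample_block, n_iters=1000):
        state = dict(init_state)
        trace = []
        for _ in range(n_iters):
            for block in blocks:           # one sweep over all clusters
                state.update(sample_block(block, state))
            trace.append(dict(state))      # record the sample
        return trace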


Advances in Lifted Importance Sampling

AAAI Conferences

We consider lifted importance sampling (LIS), a previously proposed approximate inference algorithm for statistical relational learning (SRL) models. LIS achieves substantial variance reduction over conventional importance sampling by using various lifting rules that take advantage of the symmetry in the relational representation. However, it suffers from two drawbacks. First, it does not take advantage of some important symmetries in the relational representation and may exhibit needlessly high variance on models having these symmetries. Second, it uses an uninformative proposal distribution, which adversely affects its accuracy. We propose two improvements to LIS that address these limitations. First, we identify a new symmetry in SRL models and define a lifting rule for taking advantage of it; this rule reduces the variance of LIS. Second, we propose a new, structured approach for constructing and dynamically updating the proposal distribution via adaptive sampling. We demonstrate experimentally that the improved LIS algorithm is substantially more accurate than the original.
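
For orientation, the baseline that LIS refines is ordinary importance sampling of the partition function. A minimal ground-level sketch with assumed `propose` and `weight` interfaces follows; the lifting rules and the adaptive proposal updates are not shown.

    # Plain importance sampling of Z = sum_x w(x). `propose()` returns a
    # sample x ~ Q together with its probability Q(x) > 0, and `weight(x)`
    # is the unnormalized model weight (both assumed interfaces).
    def importance_sampling_Z(propose, weight, n_samples=10000):
        total = 0.0
        for _ in range(n_samples):
            x, q = propose()
            total += weight(x) / q     # unbiased: E_Q[w(X)/Q(X)] = Z
        return total / n_samples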


A Complete Anytime Algorithm for Treewidth

arXiv.org Artificial Intelligence

In this paper, we present a branch-and-bound algorithm called QuickBB for computing the treewidth of an undirected graph. The algorithm searches the space of perfect elimination orderings of the vertices of the graph, using novel pruning and propagation techniques derived from the theory of graph minors and graph isomorphism. We also present a new algorithm called minor-min-width for computing a lower bound on treewidth, which is used within the branch-and-bound algorithm and improves over earlier available lower bounds. Empirical evaluation of QuickBB on randomly generated graphs and benchmarks from graph coloring and Bayesian networks shows that it is consistently better than complete algorithms like QuickTree [Shoikhet and Geiger, 1997] in terms of CPU time. QuickBB also has good anytime performance, generating good upper bounds on the treewidth of graphs whose optimal treewidth could not previously be computed.
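
The search space is easy to state in code: below is a minimal sketch of the induced width of a single elimination ordering, whose minimum over all orderings is the treewidth. The adjacency-set representation is an assumption, and the pruning, propagation, and minor-min-width components are not shown.

    # Induced width of one elimination ordering. `graph` maps each vertex
    # to its set of neighbors (assumed representation).
    def induced_width(graph, order):
        g = {v: set(nbrs) for v, nbrs in graph.items()}
        width = 0
        for v in order:
            nbrs = g.pop(v)
            width = max(width, len(nbrs))
            for u in nbrs:             # eliminating v cliques its neighbors
                g[u] |= nbrs - {u}
                g[u].discard(v)
        return width

    # A 4-cycle has treewidth 2; any of its orderings attains width 2:
    # induced_width({1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}, [1, 2, 3, 4])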


Studies in Lower Bounding Probabilities of Evidence using the Markov Inequality

arXiv.org Artificial Intelligence

Computing the probability of evidence, even with known error bounds, is NP-hard. In this paper we address this hard problem by settling for an easier one: we propose an approximation that provides high-confidence lower bounds on the probability of evidence but offers no guarantees in terms of relative or absolute error. Our proposed approximation is a randomized importance sampling scheme that uses the Markov inequality. However, a straightforward application of the Markov inequality may lead to poor lower bounds, so we propose several heuristic measures to improve its performance in practice. Empirical evaluation of our scheme against state-of-the-art lower bounding schemes reveals the promise of our approach.
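
The core inequality behind such bounds is short enough to state here (the standard Markov-inequality argument; the paper's heuristic measures are not reproduced). For a nonnegative estimator Z with E[Z] = P(e), such as an importance sampling estimate:

    % Markov's inequality with t = E[Z]/alpha, 0 < alpha < 1:
    \Pr\Bigl(Z \ge \tfrac{P(e)}{\alpha}\Bigr) \le \alpha
    \quad\Longrightarrow\quad
    \Pr\bigl(\alpha Z \le P(e)\bigr) \ge 1 - \alpha
    % so an observed value z gives the lower bound alpha * z on P(e)
    % with confidence at least 1 - alpha.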


AND/OR Importance Sampling

arXiv.org Artificial Intelligence

The paper introduces AND/OR importance sampling for probabilistic graphical models. In contrast to conventional importance sampling, AND/OR importance sampling caches samples in the AND/OR space and then extracts a new sample mean from the stored samples. We prove that AND/OR importance sampling may have lower variance than importance sampling, thereby providing a theoretical justification for preferring it. Our empirical evaluation demonstrates that AND/OR importance sampling is far more accurate than importance sampling in many cases.
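
A toy illustration of the variance argument follows; it is not the AND/OR algorithm itself, which operates on an AND/OR search space, but it shows the underlying effect: when the target quantity factors into independent components, the product of component sample means has the same expectation as the mean of per-sample products, while implicitly averaging over many more sample combinations.

    # Toy comparison, assuming xs and ys are equally long lists of
    # independent draws and the target is E[f(X)] * E[g(Y)].
    def or_estimate(xs, ys, f, g):
        # conventional estimate: mean of per-sample products
        return sum(f(x) * g(y) for x, y in zip(xs, ys)) / len(xs)

    def and_or_estimate(xs, ys, f, g):
        # decomposed estimate: product of component means; same
        # expectation, typically lower variance
        fx = sum(f(x) for x in xs) / len(xs)
        gy = sum(g(y) for y in ys) / len(ys)
        return fx * gy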