
Collaborating Authors: Nath, Aniruddh


Learning Tractable Probabilistic Models for Fault Localization

AAAI Conferences

In recent years, several probabilistic techniques have been applied to various debugging problems. However, most existing probabilistic debugging systems use relatively simple statistical models and fail to generalize across multiple programs. In this work, we propose Tractable Fault Localization Models (TFLMs), which can be learned from data and used to probabilistically infer the location of a bug. While most previous statistical debugging methods generalize over many executions of a single program, TFLMs are trained on a corpus of previously seen buggy programs, and learn to identify recurring patterns of bugs. Widely used fault localization techniques such as TARANTULA evaluate the suspiciousness of each line in isolation; in contrast, a TFLM defines a joint probability distribution over bug-indicator variables for all lines. Joint distributions with rich dependency structure are often computationally intractable; TFLMs avoid this by exploiting recent developments in tractable probabilistic models (specifically, Relational SPNs). Further, TFLMs can incorporate additional sources of information, including coverage-based features such as TARANTULA scores. We evaluate the fault localization performance of TFLMs that include TARANTULA scores as features in the probabilistic model. Our study shows that the learned TFLMs isolate bugs more effectively than previous statistical methods, and more effectively than using TARANTULA directly.
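
For concreteness, the TARANTULA score that TFLMs consume as a per-line feature is computed purely from test-coverage counts, with no notion of dependencies between lines. Below is a minimal sketch of that score; the function and variable names are ours, chosen for illustration.

```python
def tarantula_suspiciousness(failed_cov, passed_cov, total_failed, total_passed):
    """TARANTULA suspiciousness of a single program line.

    failed_cov / passed_cov: failing / passing test cases that execute the line.
    total_failed / total_passed: totals for the whole test suite.
    """
    if failed_cov == 0 and passed_cov == 0:
        return 0.0  # line never executed by any test
    fail_ratio = failed_cov / total_failed if total_failed else 0.0
    pass_ratio = passed_cov / total_passed if total_passed else 0.0
    denom = fail_ratio + pass_ratio
    return fail_ratio / denom if denom else 0.0

# Example: a line covered by 4 of 5 failing tests but only 10 of 95 passing tests.
print(tarantula_suspiciousness(4, 10, 5, 95))  # ~0.88, i.e. highly suspicious
```

A TFLM treats such per-line scores as observed features and layers a learned joint distribution over the per-line bug indicators on top of them.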


Learning Relational Sum-Product Networks

AAAI Conferences

Sum-product networks (SPNs) are a recently proposed deep architecture that guarantees tractable inference, even on certain high-treewidth models. SPNs are a propositional architecture, treating instances as independent and identically distributed. In this paper, we introduce Relational Sum-Product Networks (RSPNs), a new tractable first-order probabilistic architecture. RSPNs generalize SPNs by modeling a set of instances jointly, allowing them to influence each other's probability distributions, as well as modeling the probabilities of relations between objects. We also present LearnRSPN, the first algorithm for learning high-treewidth tractable statistical relational models. LearnRSPN is a recursive top-down structure learning algorithm for RSPNs, based on Gens and Domingos' LearnSPN algorithm for propositional SPN learning. We evaluate LearnRSPN on three datasets; it outperforms Markov Logic Networks in both running time and predictive accuracy.
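
For readers unfamiliar with the propositional building block, an SPN answers marginal and joint queries in a single bottom-up pass over its sum and product nodes, which is the source of its tractability. The toy sketch below illustrates that evaluation pass; the class names and the tiny two-variable network are ours, not from the paper.

```python
class Leaf:
    """Univariate Bernoulli distribution over one variable."""
    def __init__(self, var, p_true):
        self.var, self.p_true = var, p_true
    def value(self, evidence):
        if self.var not in evidence:
            return 1.0  # variable not observed: marginalize it out
        return self.p_true if evidence[self.var] else 1.0 - self.p_true

class Product:
    """Children have disjoint scopes; their values multiply."""
    def __init__(self, children):
        self.children = children
    def value(self, evidence):
        result = 1.0
        for child in self.children:
            result *= child.value(evidence)
        return result

class Sum:
    """Weighted mixture of children defined over the same scope."""
    def __init__(self, weighted_children):
        self.weighted_children = weighted_children  # list of (weight, child)
    def value(self, evidence):
        return sum(w * child.value(evidence) for w, child in self.weighted_children)

# P(A, B) as a mixture of two fully factorized distributions.
spn = Sum([(0.6, Product([Leaf("A", 0.9), Leaf("B", 0.2)])),
           (0.4, Product([Leaf("A", 0.1), Leaf("B", 0.7)]))])
print(spn.value({"A": True}))              # marginal P(A=True) = 0.58
print(spn.value({"A": True, "B": True}))   # joint P(A=True, B=True) = 0.136
```

Both queries cost a single pass over the network; RSPNs carry this tractability over to the relational setting.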


Automated Debugging with Tractable Probabilistic Programming

AAAI Conferences

Probabilistic programming languages allow domain experts to specify generative models in a high-level language and to reason about those models using domain-independent algorithms. Given an input, a probabilistic program generates a distribution over outputs. In this work, we instead use probabilistic programming to reason explicitly about the distribution over programs, rather than over outputs. We propose Tractable Probabilistic Programs (TPP), a language for representing rich probabilistic dependencies between different parts of a program; we build on recent work on sum-product networks to ensure that inference remains tractable. We explain how TPP can be applied to the problem of automated program debugging: given a corpus of buggy programs, a TPP model can be learned that captures a probability distribution over the location of the bug. The model can also incorporate additional sources of information, such as coverage statistics from test suites. We also briefly outline how TPP can be used to solve the more ambitious problem of fault correction, i.e., predicting the most probable true program conditioned on a buggy one. The ability to learn common patterns of bugs and to incorporate multiple sources of information potentially makes TPP useful as a unifying framework for automated program debugging.
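
The fault-correction query mentioned at the end of the abstract is, by Bayes' rule, a search for the candidate program that maximizes its prior probability times the probability of it having been corrupted into the observed buggy program. The toy sketch below only illustrates that shape of the query; the candidate set, scoring functions, and names are ours, not part of TPP.

```python
def most_probable_true_program(buggy, candidates, prior, corruption_model):
    """Pick argmax_p P(p) * P(buggy | p) over candidate 'true' programs.

    prior(p): probability of program p under a learned model of programs.
    corruption_model(buggy, p): probability that p was mis-written as `buggy`.
    """
    return max(candidates, key=lambda p: prior(p) * corruption_model(buggy, p))

# Toy example with hand-picked scores (purely illustrative).
candidates = ["x = y + 1", "x = y - 1"]
prior = lambda p: {"x = y + 1": 0.3, "x = y - 1": 0.7}[p]
corruption = lambda buggy, p: 0.9 if p != buggy else 0.1  # bugs are one-token edits
print(most_probable_true_program("x = y + 1", candidates, prior, corruption))
# -> "x = y - 1": the more probable program under the prior, one edit away
```

In TPP, the distributions involved are represented with sum-product networks so that queries of this kind remain tractable.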


Learning Tractable Statistical Relational Models

AAAI Conferences

Intractable inference has been a major barrier to the wide adoption of statistical relational models. Existing exact methods suffer from a lack of scalability, and approximate methods tend to be unreliable. Sum-product networks (SPNs; Poon and Domingos 2011) are a recently proposed probabilistic architecture that guarantees tractable exact inference, even on many high-treewidth models. SPNs are a propositional architecture, treating instances as independent and identically distributed. In this paper, we extend SPNs to the relational setting, resulting in Relational Sum-Product Networks (RSPNs). Previous tractable statistical relational models (Domingos and Webb 2012; Webb and Domingos 2013) were defined over a pre-determined set of objects, and therefore could not generalize to new mega-examples. In contrast, RSPNs can be learned and then applied to previously unseen mega-examples. We present a learning algorithm for RSPNs; in preliminary experiments, RSPNs outperform Markov Logic Networks (Richardson and Domingos 2006) in both running time and predictive accuracy.
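
One way to see why RSPNs transfer to unseen mega-examples is that the learned sub-models are tied to classes rather than to individual objects, so the same per-object model can be reused however many objects a new mega-example contains. The sketch below illustrates only that parameter-tying idea, under the (over-strong) simplifying assumption that objects of a class are independent given the class model; the class, attribute, and function names are ours.

```python
from math import log

def log_prob_mega_example(objects, per_object_log_prob):
    """Score a set of interchangeable objects with one shared per-object model.

    Because the model is attached to the class, not to particular objects,
    it applies unchanged to mega-examples of any size.
    """
    return sum(per_object_log_prob(obj) for obj in objects)

# Toy per-object model: a single Bernoulli attribute "active" with P(active) = 0.8.
per_object = lambda obj: log(0.8) if obj["active"] else log(0.2)

print(log_prob_mega_example([{"active": True}, {"active": False}], per_object))
print(log_prob_mega_example([{"active": True}] * 5, per_object))  # larger, unseen example
```

An actual RSPN replaces the independence assumption with richer (but still tractable) aggregations over the objects of each class.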


Approximate Lifting Techniques for Belief Propagation

AAAI Conferences

Many AI applications need to explicitly represent relational structure as well as handle uncertainty. First-order probabilistic models combine the power of logic and probability to deal with such domains. A naive approach to inference in these models is to propositionalize the whole theory and carry out inference on the ground network. Lifted inference techniques (such as lifted belief propagation; Singla and Domingos 2008) provide a more scalable approach by grouping together objects that behave identically. In many cases, however, constructing the lifted network can itself be quite costly. In addition, the exact lifted network is often very close in size to the fully propositionalized model. To overcome these problems, we present approximate lifted inference, which groups together similar but distinguishable objects and treats them as if they were identical. Early stopping terminates lifted network construction at an early stage, resulting in a coarser network. Noise-tolerant hypercubes allow for small errors in the representation of the lifted network itself. Both of our algorithms can significantly speed up lifted network construction and yield much smaller models. The coarseness of the approximation can be adjusted depending on the accuracy required, and the resulting error can be bounded. Extensive evaluation on six domains demonstrates large efficiency gains with only minor (or no) loss in accuracy.
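
Exact lifting groups together objects whose evidence and neighborhoods are literally indistinguishable; the approximations described above relax that requirement. The sketch below shows signature-based grouping with early stopping: halting the refinement after a fixed number of rounds yields a coarser partition that treats some distinguishable objects as identical. It is a simplified illustration in our own notation and does not implement the paper's hypercube representation.

```python
from collections import defaultdict

def lift_by_signature(nodes, neighbors, evidence, max_rounds):
    """Group nodes whose evidence and neighborhood 'colors' look identical.

    With a small max_rounds (early stopping), refinement halts before
    convergence, so some distinguishable nodes stay grouped together,
    trading a little accuracy for a smaller lifted network.
    """
    color = {n: str(evidence.get(n)) for n in nodes}  # initial color = own evidence
    for _ in range(max_rounds):
        new_color = {n: (color[n], tuple(sorted(color[m] for m in neighbors[n])))
                     for n in nodes}
        if len(set(new_color.values())) == len(set(color.values())):
            break  # refinement converged: the grouping is exact
        color = new_color
    groups = defaultdict(list)
    for n in nodes:
        groups[color[n]].append(n)
    return list(groups.values())

# Toy chain a-b-c-d-e-f with evidence only on a.
nodes = ["a", "b", "c", "d", "e", "f"]
neighbors = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"],
             "d": ["c", "e"], "e": ["d", "f"], "f": ["e"]}
print(lift_by_signature(nodes, neighbors, {"a": True}, max_rounds=1))   # coarser: c, d, e merged
print(lift_by_signature(nodes, neighbors, {"a": True}, max_rounds=10))  # exact: all distinct
```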


Counting-MLNs: Learning Relational Structure for Decision Making

AAAI Conferences

Many first-order probabilistic models can be represented much more compactly using aggregation operations such as counting. While traditional statistical relational representations share factors across sets of interchangeable random variables, representations that explicitly model aggregations also exploit interchangeability of random variables within factors. This is especially useful in decision-making settings, where an agent may need to reason about counts of the different types of objects it interacts with. Previous work on counting formulas in statistical relational representations has mostly focused on exact inference in an existing model; the problem of learning such models is largely unexplored. In this paper, we introduce Counting Markov Logic Networks (C-MLNs), an extension of Markov logic networks that can compactly represent complex counting formulas. We present a structure learning algorithm for C-MLNs and apply it to the novel problem of generalizing natural language instructions, and to relational reinforcement learning in the Crossblock domain, in which standard MLN learning algorithms fail to find any useful structure. The C-MLN policies learned from natural language instructions are compact and intuitive and, despite requiring no instructions for the test games, win 20% more Crossblock games than a state-of-the-art algorithm for following natural language instructions.
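
For context on what a counting formula adds: a standard MLN contributes, for each formula, its weight times the number of true groundings to the unnormalized log-probability of a world; a counting formula lets the feature be a more general function of that count (for instance, an indicator that the count reaches a threshold). The toy sketch below shows the distinction; the predicate, weights, and threshold are ours, loosely inspired by the Crossblock setting.

```python
# A toy "world": which of five blocks have been cleared.
world = {"b1": True, "b2": True, "b3": False, "b4": True, "b5": False}

def n_cleared(world):
    """Number of true groundings of cleared(x) -- the count both features use."""
    return sum(world.values())

# Standard MLN feature: weight times the raw count of true groundings.
w_standard = 0.5
standard_term = w_standard * n_cleared(world)

# Counting-formula feature: weight times a function of the count, here an
# indicator that at least three blocks are cleared.
w_counting = 2.0
counting_term = w_counting * (1 if n_cleared(world) >= 3 else 0)

# Both terms contribute to the unnormalized log-probability of the world.
print(standard_term + counting_term)  # 0.5*3 + 2.0*1 = 3.5
```

Expressing "at least three blocks are cleared" with ordinary clausal formulas would require combinatorially many clauses, which is the compactness argument made above.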


Efficient Lifting for Online Probabilistic Inference

AAAI Conferences

Lifting can greatly reduce the cost of inference in first-order probabilistic graphical models, but constructing the lifted network can itself be quite costly. In online applications (e.g., video segmentation), repeatedly constructing the lifted network for each new inference can be extremely wasteful, because the evidence typically changes little from one inference to the next. The same is true of many other problems that require repeated inference, such as utility maximization, MAP inference, interactive inference, and parameter and structure learning. In this paper, we propose an efficient algorithm for updating the structure of an existing lifted network given incremental changes to the evidence. This allows us to construct the lifted network once for the initial inference problem and amortize the cost over subsequent problems. Experiments on video segmentation and viral marketing problems show that the algorithm greatly reduces the cost of inference without affecting the quality of the solutions.
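
A deliberately simplified picture of the incremental update: reuse the previous partition into supernodes everywhere, and re-run lifting only on the region touched by the evidence change. The sketch below captures that amortization idea only; the one-hop "dirty" region, the function names, and the regroup callback are ours, and the actual algorithm works on the lifted network's hypercube representation rather than on explicit node lists.

```python
def update_lifted_groups(groups, neighbors, changed_nodes, regroup):
    """Reuse untouched supernodes; re-lift only nodes near changed evidence.

    groups: current partition of nodes into supernodes (list of lists).
    changed_nodes: nodes whose evidence was added, removed, or flipped.
    regroup: a lifting routine (e.g. signature grouping) applied locally.
    """
    # Nodes whose supernode may change: changed nodes plus their neighbors.
    # (Assumption made for this sketch; in general the effect can propagate further.)
    dirty = set(changed_nodes)
    for n in changed_nodes:
        dirty.update(neighbors[n])

    kept, to_refine = [], []
    for group in groups:
        if dirty.isdisjoint(group):
            kept.append(group)       # supernode untouched: reuse it as-is
        else:
            to_refine.extend(group)  # supernode touched: re-lift its members
    return kept + regroup(to_refine)
```

The payoff is exactly the amortization described above: the expensive full construction happens once, and each subsequent inference pays only for the part of the network that actually changed.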


Efficient Belief Propagation for Utility Maximization and Repeated Inference

AAAI Conferences

Many problems require repeated inference on probabilistic graphical models, with different values for evidence variables or other changes. Examples of such problems include utility maximization, MAP inference, online and interactive inference, parameter and structure learning, and dynamic inference. Since small changes to the evidence typically affect only a small region of the network, repeatedly performing inference from scratch can be massively redundant. In this paper, we propose expanding frontier belief propagation (EFBP), an efficient approximate algorithm for probabilistic inference with incremental changes to the evidence (or model). EFBP is an extension of loopy belief propagation (BP) in which each run of inference reuses results from the previous ones, instead of starting from scratch with the new evidence; messages are propagated only in the regions of the network affected by the changes. We provide theoretical guarantees bounding the difference between the beliefs generated by EFBP and those of standard BP, and apply EFBP to the problem of expected utility maximization in influence diagrams. Experiments on viral marketing and combinatorial auction problems show that EFBP can converge much faster than BP without significantly affecting the quality of the solutions.
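
A minimal sketch of the message-reuse idea on a pairwise binary Markov random field: messages from the previous run are kept, a frontier is seeded with the nodes whose potentials changed, and a node's outgoing messages are recomputed only when an incoming change was large enough to matter. Data structures and names are ours; in particular, this sketch does not include the error bounds relative to standard BP that the paper provides.

```python
import numpy as np

def efbp_update(phi, psi, neighbors, messages, changed_nodes, tol=1e-6, max_steps=10000):
    """Incremental loopy BP: propagate messages only where the evidence change matters.

    phi: dict node -> length-2 node potential (already reflecting the new evidence).
    psi: dict (i, j) -> 2x2 edge potential, rows indexed by x_i (one per ordered pair).
    messages: dict (i, j) -> length-2 message kept from the previous run (reused).
    """
    frontier = list(changed_nodes)
    steps = 0
    while frontier and steps < max_steps:
        i = frontier.pop()
        steps += 1
        for j in neighbors[i]:
            # m_{i->j}(x_j) = sum_{x_i} phi_i(x_i) psi(x_i, x_j) prod_{k != j} m_{k->i}(x_i)
            incoming = np.ones(2)
            for k in neighbors[i]:
                if k != j:
                    incoming *= messages[(k, i)]
            new_msg = (phi[i] * incoming) @ psi[(i, j)]
            new_msg /= new_msg.sum()
            if np.abs(new_msg - messages[(i, j)]).max() > tol:
                messages[(i, j)] = new_msg
                frontier.append(j)  # the change was significant: expand the frontier
    return messages
```

Node beliefs are then read off exactly as in standard BP, as the normalized product of a node's potential and its incoming messages; nodes the frontier never reached keep their previous beliefs unchanged.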

