Collaborating Authors

 Darwiche, Adnan


Approximating the Partition Function by Deleting and then Correcting for Model Edges

arXiv.org Machine Learning

We propose an approach for approximating the partition function which is based on two steps: (1) computing the partition function of a simplified model which is obtained by deleting model edges, and (2) rectifying the result by applying an edge-by-edge correction. The approach leads to an intuitive framework in which one can trade off the quality of an approximation with the complexity of computing it. It also includes the Bethe free energy approximation as a degenerate case. We develop the approach theoretically in this paper and provide a number of empirical results that reveal its practical utility.
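
As a toy illustration of the two steps (with made-up potentials and brute-force enumeration, not the paper's actual correction scheme), the sketch below computes the partition function of a small pairwise model, deletes one edge to obtain a simplified model, and reports the exact multiplicative correction that step (2) would need to supply; the paper's contribution is an edge-by-edge approximation of such corrections.

```python
# Toy illustration (hypothetical model, not from the paper): a 3-node pairwise
# model over binary variables with one edge deleted, showing that
#   Z = Z_deleted * correction,
# where the exact correction is the quantity the paper approximates edge by edge.
from itertools import product

# Pairwise potentials on edges (A,B), (B,C), (A,C); values chosen arbitrarily.
edges = {
    ("A", "B"): lambda a, b: 2.0 if a == b else 1.0,
    ("B", "C"): lambda b, c: 3.0 if b == c else 1.0,
    ("A", "C"): lambda a, c: 1.5 if a == c else 1.0,
}

def partition_function(edge_set):
    """Brute-force partition function over binary variables A, B, C."""
    z = 0.0
    for a, b, c in product([0, 1], repeat=3):
        vals = {"A": a, "B": b, "C": c}
        weight = 1.0
        for (u, v), phi in edge_set.items():
            weight *= phi(vals[u], vals[v])
        z += weight
    return z

z_full = partition_function(edges)

# Step 1: delete the edge (A, C), leaving a simpler (tree-structured) model.
deleted = {e: phi for e, phi in edges.items() if e != ("A", "C")}
z_deleted = partition_function(deleted)

# Step 2: the exact multiplicative correction for the deleted edge.
correction = z_full / z_deleted
print(z_full, z_deleted, correction)  # Z, Z of the simplified model, exact edge correction
```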


EDML: A Method for Learning Parameters in Bayesian Networks

arXiv.org Artificial Intelligence

We propose a method called EDML for learning MAP parameters in binary Bayesian networks under incomplete data. The method assumes Beta priors and can be used to learn maximum likelihood parameters when the priors are uninformative. EDML exhibits interesting behaviors, especially when compared to EM. We introduce EDML, explain its origin, and study some of its properties both analytically and empirically.
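
For orientation, the sketch below shows the Beta-prior MAP estimate in the easy complete-data case for a single binary parameter; it is standard background on the objective EDML targets, not an implementation of EDML, and the function name and default prior are illustrative.

```python
# Background sketch (not EDML itself): MAP estimate of a single binary
# parameter theta under a Beta(alpha, beta) prior with complete data.
# EDML addresses the harder incomplete-data case with an iterative scheme.

def beta_map_estimate(n_true, n_false, alpha=2.0, beta=2.0):
    """Posterior mode of theta given n_true/n_false observations and a Beta prior."""
    # Mode of Beta(alpha + n_true, beta + n_false), assuming both posterior
    # shape parameters exceed 1 so the mode is interior.
    return (n_true + alpha - 1.0) / (n_true + n_false + alpha + beta - 2.0)

# With uninformative priors (alpha = beta = 1) this reduces to the
# maximum-likelihood estimate n_true / (n_true + n_false).
print(beta_map_estimate(7, 3))            # MAP with a Beta(2, 2) prior
print(beta_map_estimate(7, 3, 1.0, 1.0))  # ML estimate, 0.7
```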


A Lower Bound on the Size of Decomposable Negation Normal Form

AAAI Conferences

We consider in this paper the size of a Decomposable Negation Normal Form (DNNF) that respects a given vtree (known as structured DNNF). This representation of propositional knowledge bases was introduced recently and shown to include OBDD as a special case (an OBDD variable ordering is a special type of vtree). We provide a lower bound on the size of any structured DNNF and discuss three particular instances of this bound, which correspond to three distinct subsets of structured DNNF (including OBDD). We show that our lower bound subsumes the influential Sieling and Wegener's lower bound for OBDDs, which is typically used for identifying variable orderings that will cause a blowup in the OBDD size. We show that our lower bound allows for similar usage but with respect to vtrees, which provide structure for DNNFs in the same way that variable orderings provide structure for OBDDs. We finally discuss some of the theoretical and practical implications of our lower bound.
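
For reference, the OBDD bound being generalized is the classical Sieling and Wegener characterization, restated below from the standard OBDD literature; it is background, not the paper's new bound, which is parameterized by a vtree rather than a variable ordering.

```latex
% Classical Sieling--Wegener characterization for OBDDs under the ordering
% x_1 < ... < x_n (quoted as standard material, not as this paper's result):
% the number of nodes labeled x_i in the reduced OBDD of f equals the number
% of distinct subfunctions, obtained by fixing x_1, ..., x_{i-1}, that still
% depend essentially on x_i.
\[
  \#\{\text{nodes labeled } x_i\}
  \;=\;
  \Bigl|\Bigl\{\, f|_{x_1=a_1,\dots,x_{i-1}=a_{i-1}}
        \;:\; a \in \{0,1\}^{\,i-1},\;
        f|_{x_1=a_1,\dots,x_{i-1}=a_{i-1}} \text{ depends essentially on } x_i
  \Bigr\}\Bigr|.
\]
```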


Approximating MAP by Compensating for Structural Relaxations

Neural Information Processing Systems

We introduce a new perspective on approximations to the maximum a posteriori (MAP) task in probabilistic graphical models that is based on simplifying a given instance, and then tightening the approximation. First, we start with a structural relaxation of the original model. We then infer from the relaxation its deficiencies, and compensate for them. This perspective allows us to identify two distinct classes of approximations. First, we find that max-product belief propagation can be viewed as a way to compensate for a relaxation, based on a particular idealized case for exactness. We identify a second approach to compensation that is based on a more refined idealized case, resulting in a new approximation with distinct properties. We go on to propose a new class of algorithms that, starting with a relaxation, iteratively yields tighter approximations.
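
To make the first class of approximations concrete, the sketch below runs textbook max-product message passing on a three-variable binary chain with invented potentials and checks the decoded assignment by enumeration; it illustrates the standard algorithm that the paper reinterprets as compensating for a relaxation, not the paper's refined compensation scheme.

```python
# Generic max-product message passing on a 3-variable binary chain
# (illustrative potentials; not the compensation scheme from the paper).
from itertools import product
import numpy as np

unary = [np.array([1.0, 2.5]),      # phi_1(x1)
         np.array([1.5, 1.0]),      # phi_2(x2)
         np.array([1.0, 3.0])]      # phi_3(x3)
pair = [np.array([[2.0, 1.0],
                  [1.0, 2.0]]),     # psi_12(x1, x2)
        np.array([[1.0, 2.0],
                  [2.0, 1.0]])]     # psi_23(x2, x3)

# Forward max-messages: m_fwd[i] arrives at node i from the left.
m_fwd = [np.ones(2)]
for i in range(2):
    incoming = unary[i] * m_fwd[-1]
    m_fwd.append(np.max(pair[i] * incoming[:, None], axis=0))

# Backward max-messages: m_bwd[i] arrives at node i from the right.
m_bwd = [np.ones(2)]
for i in reversed(range(2)):
    incoming = unary[i + 1] * m_bwd[0]
    m_bwd.insert(0, np.max(pair[i] * incoming[None, :], axis=1))

# Decode a MAP assignment from the max-marginals.
map_assignment = [int(np.argmax(unary[i] * m_fwd[i] * m_bwd[i])) for i in range(3)]

# Brute-force check of the MAP assignment on this small chain.
def score(x):
    s = unary[0][x[0]] * unary[1][x[1]] * unary[2][x[2]]
    return s * pair[0][x[0], x[1]] * pair[1][x[1], x[2]]

best = max(product([0, 1], repeat=3), key=score)
print(map_assignment, list(best))  # both should read [1, 0, 1]
```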


A Differential Semantics for Jointree Algorithms

Neural Information Processing Systems

A new approach to inference in belief networks has been recently proposed, which is based on an algebraic representation of belief networks using multi-linear functions. According to this approach, the key computational question is that of representing multi-linear functions compactly, since inference reduces to a simple process of evaluating and differentiating such functions. We show here that mainstream inference algorithms based on jointrees are a special case of this approach in a very precise sense. We use this result to prove new properties of jointree algorithms, and then discuss some of its practical and theoretical implications.
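
As a small self-contained illustration of the algebraic view (a two-node toy network invented here, not code from the paper), the sketch below evaluates the multi-linear network polynomial under evidence to obtain the probability of evidence, and exploits its linearity in each evidence indicator to compute a partial derivative, which recovers a joint marginal and hence a posterior.

```python
# Toy two-node network A -> B (parameters invented for illustration).
# The multi-linear network polynomial is
#   f(lambda, theta) = sum over (a, b) of lambda_a * lambda_b * theta_a * theta_{b|a}.
# Evaluating f under evidence gives the probability of evidence; differentiating
# with respect to an evidence indicator gives a joint marginal, which is the
# semantics the paper attaches to jointree computations.
from itertools import product

theta_a = {0: 0.7, 1: 0.3}                      # P(A)
theta_b_given_a = {(0, 0): 0.8, (1, 0): 0.2,    # P(B | A): keys are (b, a)
                   (0, 1): 0.1, (1, 1): 0.9}

def network_polynomial(lam_a, lam_b):
    """Evaluate f for the given evidence indicators lam_a[a], lam_b[b]."""
    return sum(lam_a[a] * lam_b[b] * theta_a[a] * theta_b_given_a[(b, a)]
               for a, b in product([0, 1], repeat=2))

# Evidence B = 1: indicators are 1 when consistent with the evidence, else 0.
lam_a = {0: 1.0, 1: 1.0}
lam_b = {0: 0.0, 1: 1.0}
p_evidence = network_polynomial(lam_a, lam_b)          # P(B = 1) = 0.41

# Because f has degree one in each indicator, the partial derivative with
# respect to lambda_{A=1} is f at lambda_{A=1} = 1 minus f at lambda_{A=1} = 0.
df_dlam_a1 = (network_polynomial({0: 1.0, 1: 1.0}, lam_b)
              - network_polynomial({0: 1.0, 1: 0.0}, lam_b))
print(p_evidence, df_dlam_a1, df_dlam_a1 / p_evidence)  # P(e), P(A=1, e), P(A=1 | e)
```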


Model-Based Diagnosis under Real-World Constraints

AI Magazine

I report on my experience over the past few years in introducing automated, model-based diagnostic technologies into industrial settings. In particular, I discuss the competition that this technology has been receiving from handcrafted, rule-based diagnostic systems; this competition has set some high standards that must be met by model-based systems before they can be viewed as viable alternatives. The battle between model-based and rule-based approaches to diagnosis has been over in the academic literature for many years, but the situation is different in industry, where rule-based systems are dominant and appear attractive on the grounds of efficiency, embeddability, and cost effectiveness. My goal in this article is to provide a perspective on this competition and to discuss a diagnostic tool, called DTOOL/CNETS, that I have been developing over the years as I tried to address the major challenges posed by rule-based systems. In particular, I discuss three major features of the developed tool that were adopted, designed, or innovated to address these challenges: (1) its compositional modeling approach, (2) its structure-based computational approach, and (3) its ability to synthesize embeddable diagnostic systems for a variety of software and hardware platforms.