LAYERWIDTH: Analysis of a New Metric for Directed Acyclic Graphs

arXiv.org Artificial Intelligence

We analyze a new property of directed acyclic graphs (DAGs), called layerwidth, arising from a class of DAGs proposed by Eiter and Lukasiewicz. This class of DAGs permits certain problems of structural model-based causality and explanation to be tractably solved. In this paper, we first address an open question raised by Eiter and Lukasiewicz - the computational complexity of deciding whether a given graph has a bounded layerwidth. After proving that this problem is NP-complete, we proceed by proving numerous important properties of layerwidth that are helpful in efficiently computing the optimal layerwidth. Finally, we compare this new DAG property to two other important DAG properties: treewidth and bandwidth.
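The abstract does not reproduce Eiter and Lukasiewicz's formal definition of layerwidth. As a loose illustration of the general idea of measuring the width of a layered decomposition of a DAG, the sketch below computes a longest-path (topological) layering and reports the size of its largest layer; the layering rule and function names are assumptions for illustration, not the paper's definition.

```python
from collections import defaultdict

def longest_path_layering(nodes, edges):
    """Assign each DAG node the length of the longest directed path reaching it."""
    preds = defaultdict(list)
    for u, v in edges:
        preds[v].append(u)
    layer = {}
    def depth(v):
        if v not in layer:
            layer[v] = 1 + max((depth(u) for u in preds[v]), default=-1)
        return layer[v]
    for v in nodes:
        depth(v)
    return layer

def max_layer_size(nodes, edges):
    """Width of this particular layering: the size of its largest layer."""
    layer = longest_path_layering(nodes, edges)
    counts = defaultdict(int)
    for v in nodes:
        counts[layer[v]] += 1
    return max(counts.values())

# Example: the diamond DAG a -> b, a -> c, b -> d, c -> d has a largest layer {b, c}.
print(max_layer_size(['a', 'b', 'c', 'd'],
                     [('a', 'b'), ('a', 'c'), ('b', 'd'), ('c', 'd')]))  # prints 2
```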


Optimal Limited Contingency Planning

arXiv.org Artificial Intelligence

For a given problem, the optimal Markov policy can be considered as a conditional or contingent plan containing a (potentially large) number of branches. Unfortunately, there are applications where it is desirable to strictly limit the number of decision points and branches in a plan. For example, it may be that plans must later undergo more detailed simulation to verify correctness and safety, or that they must be simple enough to be understood and analyzed by humans. As a result, it may be necessary to limit consideration to plans with only a small number of branches. This raises the question of how one goes about finding optimal plans containing only a limited number of branches. In this paper, we present an any-time algorithm for optimal k-contingency planning (OKP). It is the first optimal algorithm for limited contingency planning that is not an explicit enumeration of possible contingent plans. By modelling the problem as a Partially Observable Markov Decision Process, it implements the Bellman optimality principle and prunes the solution space. We present experimental results of applying this algorithm to some simple test cases.
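The OKP construction itself is not spelled out in the abstract. As background only, the minimal sketch below runs plain value iteration on a tiny hypothetical MDP (states, actions, transitions, and rewards are all invented for illustration), which is the Bellman-style recursion that a limited-contingency formulation builds on.

```python
# Plain value iteration illustrating the Bellman optimality principle on a tiny
# hypothetical MDP; all states, actions, transitions, and rewards are invented.
GAMMA = 0.95
states = ['s0', 's1']
actions = ['stay', 'go']
# transition[s][a] = list of (next_state, probability); reward[s][a] = immediate reward
transition = {
    's0': {'stay': [('s0', 1.0)], 'go': [('s1', 0.8), ('s0', 0.2)]},
    's1': {'stay': [('s1', 1.0)], 'go': [('s0', 1.0)]},
}
reward = {'s0': {'stay': 0.0, 'go': 1.0}, 's1': {'stay': 2.0, 'go': 0.0}}

V = {s: 0.0 for s in states}
for _ in range(200):  # iterate the Bellman backup to (near) convergence
    V = {s: max(reward[s][a] + GAMMA * sum(p * V[t] for t, p in transition[s][a])
                for a in actions)
         for s in states}

print({s: round(v, 2) for s, v in V.items()})
```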


Reasoning about Bayesian Network Classifiers

arXiv.org Artificial Intelligence

Bayesian network classifiers are used in many fields, and one common class of such classifiers is the naive Bayes classifier. In this paper, we introduce an approach for reasoning about Bayesian network classifiers in which we explicitly convert them into Ordered Decision Diagrams (ODDs), which are then used to reason about the properties of these classifiers. Specifically, we present an algorithm for converting any naive Bayes classifier into an ODD, and we show theoretically and experimentally that this algorithm can give us an ODD that is tractable in size even given an intractable number of instances. Since ODDs are tractable representations of classifiers, our algorithm allows us to efficiently test the equivalence of two naive Bayes classifiers and characterize discrepancies between them. We also present a number of additional results, including a count of the distinct classifiers that can be induced by changing some CPT in a naive Bayes classifier, and the range of allowable changes to a CPT that keeps the current classifier unchanged.
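The compilation algorithm itself is not given in the abstract. The sketch below only illustrates the object being compiled: a hypothetical naive Bayes classifier over three binary features, whose induced Boolean decision function is extracted here by brute-force enumeration of all instances. An ODD represents exactly this function, but compactly, which is what makes such enumeration avoidable.

```python
from itertools import product
import math

# A hypothetical naive Bayes classifier with binary class C and binary features F1..F3.
# prior = P(C=1); likelihood[i] = (P(Fi=1 | C=1), P(Fi=1 | C=0)). Numbers are invented.
prior = 0.3
likelihood = [(0.9, 0.2), (0.6, 0.4), (0.7, 0.1)]

def classifies_positive(instance):
    """Return True iff the posterior log-odds of C=1 given the instance are >= 0."""
    log_odds = math.log(prior / (1 - prior))
    for f, (p1, p0) in zip(instance, likelihood):
        a, b = (p1, p0) if f else (1 - p1, 1 - p0)
        log_odds += math.log(a / b)
    return log_odds >= 0

# Brute-force the induced Boolean decision function over all 2^3 instances.
# (An ODD represents this function compactly; enumeration is exponential in general.)
for instance in product([0, 1], repeat=3):
    print(instance, classifies_positive(instance))
```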


Decision Making with Partially Consonant Belief Functions

arXiv.org Artificial Intelligence

This paper studies decision making for Walley's partially consonant belief functions (pcb). In a pcb, the set of foci is partitioned; within each partition, the foci are nested. The pcb class includes probability functions and possibility functions as extreme cases. Unlike earlier proposals for a decision theory with belief functions, we employ an axiomatic approach. We adopt an axiom system similar in spirit to von Neumann-Morgenstern linear utility theory for a preference relation on pcb lotteries, and we prove a representation theorem for this relation. Utility for a pcb lottery is a combination of linear utility for a probabilistic lottery and binary utility for a possibilistic lottery.
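As a rough sketch of the two extreme cases named above (the notation, and in particular the possibilistic form, are assumptions in the style of Giang and Shenoy's binary utility, and may differ in detail from the paper's representation theorem):

```latex
% Purely probabilistic pcb lottery: von Neumann--Morgenstern linear (expected) utility.
U(p_1, x_1; \dots; p_n, x_n) = \sum_{i=1}^{n} p_i \, u(x_i)

% Purely possibilistic pcb lottery (assumed binary-utility form): each prize x_i carries a
% utility pair (\lambda_i, \rho_i) with \max(\lambda_i, \rho_i) = 1, and the lottery's
% utility is a max-min combination of possibilities and utility pairs.
U(\pi_1, x_1; \dots; \pi_n, x_n) =
  \Big( \max_i \min(\pi_i, \lambda_i), \; \max_i \min(\pi_i, \rho_i) \Big)
```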


Implementation and Comparison of Solution Methods for Decision Processes with Non-Markovian Rewards

arXiv.org Artificial Intelligence

This paper examines a number of solution methods for decision processes with non-Markovian rewards (NMRDPs). They all exploit a temporal logic specification of the reward function to automatically translate the NMRDP into an equivalent Markov decision process (MDP) amenable to well-known MDP solution methods. They differ, however, in the representation of the target MDP and in the class of MDP solution methods to which they are suited, and as a result they adopt different temporal logics and different translations. Unfortunately, no implementations of these methods, nor any experimental (let alone comparative) results, have ever been reported. This paper is the first step towards filling this gap. We describe an integrated system for solving NMRDPs which implements these methods and several variants under a common interface; we use it to compare the various approaches and to identify the problem features favoring one over the others.
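The translations themselves depend on the particular temporal logic and are not detailed in the abstract. The sketch below illustrates only the common underlying idea: make a non-Markovian reward Markovian by augmenting each state with the state of a small automaton that tracks the relevant history. The reward ("1 whenever the agent reaches goal after having visited key at some earlier step") and all names are invented for illustration and correspond to none of the paper's specific translations.

```python
# Product construction: augment each NMRDP state with a Boolean flag recording whether
# 'key' has been visited, so the reward becomes Markovian in the enlarged state space.
def translate(nmrdp_states, nmrdp_transitions):
    """nmrdp_transitions[s][a] = list of (next_state, probability).
    Returns the product state space {(s, seen_key)} and its Markovian transitions."""
    mdp_states = [(s, seen) for s in nmrdp_states for seen in (False, True)]
    mdp_transitions = {}
    for (s, seen) in mdp_states:
        for a, dist in nmrdp_transitions[s].items():
            mdp_transitions[((s, seen), a)] = [((t, seen or t == 'key'), p)
                                               for t, p in dist]
    return mdp_states, mdp_transitions

def markov_reward(product_state, next_product_state):
    """Reward of 1 on entering 'goal' if 'key' was visited earlier (tracked by the flag)."""
    (_, seen), (t, _) = product_state, next_product_state
    return 1.0 if seen and t == 'goal' else 0.0
```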


Inference in Polytrees with Sets of Probabilities

arXiv.org Artificial Intelligence

Inferences in directed acyclic graphs associated with probability sets and probability intervals are NP-hard, even for polytrees. In this paper we focus on such inferences, and propose: 1) a substantial improvement on Tessem's A/R algorithm for polytrees with probability intervals; 2) a new algorithm for direction-based local search (in sets of probability) that improves on existing methods; 3) a collection of branch-and-bound algorithms that combine the previous techniques. The first two techniques lead to approximate solutions, while branch-and-bound procedures can produce either exact or approximate solutions. We report on dramatic improvements on existing techniques for inference with probability sets and intervals, in some cases reducing the computational effort by many orders of magnitude.
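The algorithms listed above are not reproduced here. As a point of reference for what is being computed, the sketch below brute-forces the posterior bounds on a two-node network with interval-valued probabilities by enumerating the extreme points of every interval; the numbers are invented, and the paper's A/R improvement and branch-and-bound procedures exist precisely to avoid this kind of exhaustive enumeration on larger polytrees.

```python
from itertools import product

# Two-node credal network A -> B with binary variables and interval-valued probabilities.
# All interval bounds below are invented for illustration.
P_A1 = (0.2, 0.4)            # interval for P(A=1)
P_B1_given_A1 = (0.7, 0.9)   # interval for P(B=1 | A=1)
P_B1_given_A0 = (0.1, 0.3)   # interval for P(B=1 | A=0)

def posterior(pa1, pb1_a1, pb1_a0):
    """P(A=1 | B=1) for one choice of extreme points."""
    joint1 = pa1 * pb1_a1
    joint0 = (1 - pa1) * pb1_a0
    return joint1 / (joint1 + joint0)

# Enumerate all combinations of interval endpoints and take the envelope.
candidates = [posterior(*v) for v in product(P_A1, P_B1_given_A1, P_B1_given_A0)]
print('P(A=1 | B=1) lies in [%.3f, %.3f]' % (min(candidates), max(candidates)))
```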


Approximate Inference and Constrained Optimization

arXiv.org Artificial Intelligence

Loopy and generalized belief propagation are popular algorithms for approximate inference in Markov random fields and Bayesian networks. Fixed points of these algorithms correspond to extrema of the Bethe and Kikuchi free energy. However, belief propagation does not always converge, which explains the need for approaches that explicitly minimize the Kikuchi/Bethe free energy, such as CCCP and UPS. Here we describe a class of algorithms that solves this typically nonconvex constrained minimization of the Kikuchi free energy through a sequence of convex constrained minimizations of upper bounds on the Kikuchi free energy. Intuitively one would expect tighter bounds to lead to faster algorithms, which is indeed convincingly demonstrated in our simulations. Several ideas are applied to obtain tight convex bounds that yield dramatic speed-ups over CCCP.
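For reference, the objective being minimized is the Bethe free energy over pseudo-marginals (standard factor-graph notation, not reproduced from the paper; the Kikuchi free energy generalizes it by using larger regions):

```latex
F_{\mathrm{Bethe}}(b) \;=\; \sum_{a} \sum_{x_a} b_a(x_a) \ln \frac{b_a(x_a)}{f_a(x_a)}
  \;-\; \sum_{i} (d_i - 1) \sum_{x_i} b_i(x_i) \ln b_i(x_i),
```

subject to normalization and the marginalization constraints $\sum_{x_a \setminus x_i} b_a(x_a) = b_i(x_i)$, where $d_i$ is the number of factors containing variable $i$. It is the non-convexity of this constrained problem that the sequence of convex upper bounds addresses.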


Value Elimination: Bayesian Inference via Backtracking Search

arXiv.org Artificial Intelligence

Backtracking search is a powerful algorithmic paradigm that can be used to solve many problems. It is in a certain sense the dual of variable elimination; but on many problems, e.g., SAT, it is vastly superior to variable elimination in practice. Motivated by this we investigate the application of backtracking search to the problem of Bayesian inference (Bayes). We show that natural generalizations of known techniques allow backtracking search to achieve performance guarantees similar to standard algorithms for Bayes, and that there exist problems on which backtracking can in fact do much better. We also demonstrate that these ideas can be applied to implement a Bayesian inference engine whose performance is competitive with standard algorithms. Since backtracking search can very naturally take advantage of context specific structure, the potential exists for performance superior to standard algorithms on many problems.
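Value Elimination itself is not spelled out in the abstract. The minimal sketch below shows only the bare backtracking recursion that such an approach builds on: computing P(e) in a tiny hypothetical network A -> B by depth-first assignment of the unobserved variables and summation of CPT products; the guarantees and refinements discussed in the paper sit on top of a recursion of this general shape.

```python
# Bare backtracking recursion for P(e) on a tiny hypothetical network A -> B with binary
# variables: assign unobserved variables depth-first and sum the product of CPT entries
# over all completions. (A real solver would prune the A=0 branch as soon as the
# deterministic CPT row below yields zero; this sketch simply evaluates it.)
CPT = {
    'A': lambda asg: 0.6 if asg['A'] == 1 else 0.4,
    'B': lambda asg: ((0.9 if asg['B'] == 1 else 0.1) if asg['A'] == 1
                      else (0.0 if asg['B'] == 1 else 1.0)),  # B is 0 whenever A is 0
}
ORDER = ['A', 'B']

def backtrack_prob(evidence, assignment=None, depth=0):
    """Sum over all completions of the evidence of the product of CPT entries."""
    assignment = dict(evidence) if assignment is None else assignment
    if depth == len(ORDER):
        p = 1.0
        for v in ORDER:
            p *= CPT[v](assignment)
        return p
    var = ORDER[depth]
    if var in evidence:
        return backtrack_prob(evidence, assignment, depth + 1)
    total = 0.0
    for value in (0, 1):
        assignment[var] = value
        total += backtrack_prob(evidence, assignment, depth + 1)
    del assignment[var]
    return total

print(backtrack_prob({'B': 1}))  # P(B=1) = 0.6 * 0.9 + 0.4 * 0.0 = 0.54
```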


Policy-contingent abstraction for robust robot control

arXiv.org Artificial Intelligence

This paper presents a scalable control algorithm that enables a deployed mobile robot system to make high-level decisions under full consideration of its probabilistic belief. Our approach is based on insights from the rich literature of hierarchical controllers and hierarchical MDPs. The resulting controller has been successfully deployed in a nursing facility near Pittsburgh, PA. To the best of our knowledge, this work is a unique instance of applying POMDPs to high-level robotic control problems.


A Simple Insight into Iterative Belief Propagation's Success

arXiv.org Artificial Intelligence

In non-ergodic belief networks the posterior belief of many queries given evidence may become zero. The paper shows that when belief propagation is applied iteratively over arbitrary networks (the so-called iterative or loopy belief propagation (IBP)) it is identical to an arc-consistency algorithm relative to zero-belief queries (namely, assessing zero posterior probabilities). This implies that zero-belief conclusions derived by belief propagation converge and are sound. More importantly, it suggests that the inference power of IBP is as strong and as weak as that of arc-consistency. This allows the synthesis of belief networks for which belief propagation is useless on the one hand, and focuses the investigation of classes of belief networks for which belief propagation may be zero-complete on the other. Finally, all the above conclusions also apply to generalized belief propagation algorithms that extend loopy belief propagation, and allow a crisper understanding of their power.
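The paper's correspondence relates IBP's zero-belief conclusions to the value removals performed by arc-consistency on the flattened (zero versus non-zero probability) constraint network. The sketch below shows only the arc-consistency side: a plain AC-3 pass over a small invented binary constraint network; it is not IBP, and the example network is an assumption for illustration.

```python
from collections import deque

# Plain AC-3 arc consistency on a small binary constraint network (invented example).
domains = {'X': {1, 2, 3}, 'Y': {1, 2, 3}, 'Z': {1, 2}}
constraints = {                       # allowed pairs, one entry per directed arc
    ('X', 'Y'): lambda x, y: x < y,
    ('Y', 'X'): lambda y, x: x < y,
    ('Y', 'Z'): lambda y, z: y == z,
    ('Z', 'Y'): lambda z, y: y == z,
}

def ac3(domains, constraints):
    queue = deque(constraints)
    while queue:
        xi, xj = queue.popleft()
        revised = False
        for vi in set(domains[xi]):
            if not any(constraints[(xi, xj)](vi, vj) for vj in domains[xj]):
                domains[xi].discard(vi)   # vi has no support in xj's domain
                revised = True
        if revised:
            # re-examine every arc pointing into xi
            queue.extend((xk, xl) for (xk, xl) in constraints if xl == xi)
    return domains

print(ac3(domains, constraints))  # {'X': {1}, 'Y': {2}, 'Z': {2}}
```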