Belief Revision
Language Splitting and Relevance-Based Belief Change in Horn Logic
Wu, Maonian (Guizhou University) | Zhang, Dongmo (University of Western Sydney) | Zhang, Mingyi (Guizhou Academy of Sciences)
This paper presents a framework for relevance-based belief change in propositional Horn logic. We first establish a parallel interpolation theorem for Horn logic and show that Parikh's Finest Splitting Theorem holds for Horn formulae. By reformulating Parikh's relevance criterion in the setting of Horn belief change, we construct a relevance-based partial meet Horn contraction operator and provide a representation theorem for it. Interestingly, we find that this contraction operator can be fully characterised by Delgrande and Wassermann's postulates for partial meet Horn contraction together with Parikh's relevance postulate, without requiring any change to the postulates, which is qualitatively different from the case in classical propositional logic.
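For orientation, the construction follows the standard AGM partial meet recipe, given here in our own notation rather than as an excerpt from the paper: from the set of maximal subsets of K that do not entail the formula being retracted, a selection function picks out the preferred remainders, and the contraction keeps what they agree on. A minimal LaTeX sketch:

    % Partial meet contraction (AGM-style), adapted by the paper to Horn logic.
    % K \bot \varphi: maximal subsets of K not entailing \varphi;
    % \gamma: a selection function over those remainders.
    K - \varphi \;=\; \bigcap \gamma(K \bot \varphi)

Parikh's relevance criterion then constrains the selection function: beliefs whose vocabulary lies in compartments of the finest splitting untouched by the retracted formula must survive the contraction.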
Mean Field Inference in Dependency Networks: An Empirical Study
Lowd, Daniel (University of Oregon) | Shamaei, Arash (University of Oregon)
Dependency networks are a compelling alternative to Bayesian networks for learning joint probability distributions from data and using them to compute probabilities. A dependency network consists of a set of conditional probability distributions, each representing the probability of a single variable given its Markov blanket. Running Gibbs sampling with these conditional distributions produces a joint distribution that can be used to answer queries, but suffers from the traditional slowness of sampling-based inference. In this paper, we observe that the mean field update equation can be applied to dependency networks, even though the conditional probability distributions may be inconsistent with each other. In experiments with learning and inference on 12 datasets, we demonstrate that mean field inference in dependency networks offers similar accuracy to Gibbs sampling but with orders-of-magnitude improvements in speed. Compared to Bayesian networks learned on the same data, dependency networks offer higher accuracy when larger amounts of evidence are available. Furthermore, mean field inference is consistently more accurate in dependency networks than in Bayesian networks learned on the same data.
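As a concrete illustration of the update equation the abstract refers to, here is a minimal sketch for binary variables; it is our own construction under assumed data structures (cond, blankets, and mean_field are illustrative names), not the authors' code:

    # Mean field inference driven by per-variable conditionals
    # P(X_i | MB_i), as supplied by a dependency network. Variables are
    # binary; cond[i] maps a tuple of Markov-blanket values to
    # P(X_i = 1 | blanket); blankets[i] lists the blanket variables.
    import itertools, math

    def mean_field(cond, blankets, n, iters=50):
        q = [0.5] * n                   # q[i] estimates P(X_i = 1)
        for _ in range(iters):
            for i in range(n):
                mb = blankets[i]
                # E_q[log P(x_i | MB_i)] for x_i in {0, 1}, averaging over
                # blanket configurations weighted by the current marginals.
                e = [0.0, 0.0]
                for vals in itertools.product([0, 1], repeat=len(mb)):
                    w = 1.0
                    for j, v in zip(mb, vals):
                        w *= q[j] if v == 1 else 1.0 - q[j]
                    p1 = cond[i][vals]  # P(X_i = 1 | blanket = vals)
                    e[1] += w * math.log(max(p1, 1e-12))
                    e[0] += w * math.log(max(1.0 - p1, 1e-12))
                # q_i(x_i) proportional to exp(E_q[log P(x_i | MB_i)])
                z = math.exp(e[0]) + math.exp(e[1])
                q[i] = math.exp(e[1]) / z
        return q

    # Toy usage: two mutually dependent binary variables.
    cond = [{(0,): 0.9, (1,): 0.2}, {(0,): 0.3, (1,): 0.8}]
    print(mean_field(cond, blankets=[[1], [0]], n=2))

Each pass costs time exponential only in the blanket size and involves no sampling, which is consistent with the reported speedups over Gibbs sampling.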
On the Effectiveness of Belief State Representation in Contingent Planning
To, Son Thanh (New Mexico State University) | Son, Tran Cao (New Mexico State University) | Pontelli, Enrico (New Mexico State University)
This work proposes new approaches to contingent planning that use alternative belief state representations extended from those in conformant planning, together with a new AND/OR forward search algorithm, called PrAO, for computing contingent solutions. Each representation was implemented in a new contingent planner. The important role of belief state representation is confirmed by two observations: our planners all outperform other state-of-the-art planners on most benchmarks, and their relative performance varies across benchmarks even though they share the same search algorithm PrAO and the same unsophisticated heuristic scheme. The work identifies the properties of each representation method that affect performance.
Conjunctive Representations in Contingent Planning: Prime Implicates Versus Minimal CNF Formula
To, Son Thanh (New Mexico State University) | Son, Tran Cao (New Mexico State University) | Pontelli, Enrico (New Mexico State University)
This paper compares in depth the effectiveness of two conjunctive belief state representations in contingent planning: prime implicates and minimal CNF, a compact form of CNF formulae, both initially proposed in conformant planning research (To et al. 2010a; 2010b). Following the development of the contingent planner CNFct for minimal CNF (To et al. 2011b), the present paper extends the progression function for the prime implicate representation (To et al. 2010b), which computes successor belief states in the presence of incomplete information, to handle the non-deterministic and sensing actions required in contingent planning. The idea was instantiated in a new contingent planner, called PIct, using the same AND/OR search algorithm and heuristic function as CNFct. The experiments show that, like CNFct, PIct performs very well on a wide range of benchmarks. The study investigates the advantages and disadvantages of the two planners and identifies the properties of each representation method that affect performance.
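Both representations are clausal at heart; as a point of reference (a toy sketch of ours, not the planners' code), the prime implicates of a clause set can be computed by saturating resolution and discarding subsumed clauses:

    # Clauses are frozensets of ints: k is a positive literal, -k its
    # negation. Saturate under resolution, drop tautologies, and keep
    # only clauses not strictly subsumed by another clause.
    def prime_implicates(clauses):
        clauses = {frozenset(c) for c in clauses}
        while True:
            new = set()
            for c1 in clauses:
                for c2 in clauses:
                    for lit in c1:
                        if -lit in c2:
                            r = (c1 - {lit}) | (c2 - {-lit})
                            if not any(-l in r for l in r):  # skip tautologies
                                new.add(frozenset(r))
            merged = clauses | new
            pruned = {c for c in merged if not any(d < c for d in merged)}
            if pruned == clauses:
                return pruned
            clauses = pruned

    # (a or b) and (~a or c) has prime implicates {a,b}, {-a,c}, {b,c}.
    print(prime_implicates([{1, 2}, {-1, 3}]))

The planners maintain such normal forms incrementally under progression rather than recomputing them from scratch, but the sketch shows the object being maintained.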
Large Scale Diagnosis Using Associations between System Outputs and Components
Guo, Ting (Jilin University) | Li, Zhanshan (Jilin University) | Guo, Ruizhi (Jilin University) | Zhu, Xingquan (University of Technology, Sydney)
Model-based diagnosis (MBD) uses an abstraction of a system to diagnose possible faulty functions of the underlying system. To improve solution efficiency for multi-fault diagnosis problems, especially in large scale systems, this paper proposes a method to derive reasonable diagnosis solutions, under coarse diagnosis, by using the relationships between system outputs and components. Compared to existing diagnosis methods, the proposed framework only needs to consider associations between outputs and components, using an assumption-based truth maintenance system (ATMS) (de Kleer 1986) to obtain the correlated components for every output node. As a result, our method significantly reduces the number of variables required for model diagnosis, which makes it suitable for large scale circuit systems.
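The ATMS label computation itself is beyond the scope of an abstract; as a structural stand-in (our simplification, not the paper's method), the association between an output and the components that can affect it can be pictured as backward reachability over the system's connection graph:

    # inputs_of[c] lists the components feeding component c; the cone of
    # influence of an output is everything reachable walking backwards.
    def cone_of_influence(inputs_of, output):
        seen, stack = set(), [output]
        while stack:
            c = stack.pop()
            if c not in seen:
                seen.add(c)
                stack.extend(inputs_of.get(c, []))
        return seen

Restricting each output's diagnosis to its associated components is, roughly, how the variable count shrinks for large circuits.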
An Architectural Approach to Ensuring Consistency in Hierarchical Execution
Hierarchical task decomposition is a method used in many agent systems to organize agent knowledge. This work shows how the combination of a hierarchy and persistent assertions of knowledge can make it difficult to maintain logical consistency in asserted knowledge. We explore the problematic consequences of persistent assumptions in the reasoning process and introduce novel potential solutions. We implement one of these solutions, Dynamic Hierarchical Justification, and demonstrate its effectiveness with an empirical analysis.
A Knowledge Compilation Map
We propose a perspective on knowledge compilation which calls for analyzing different compilation approaches according to two key dimensions: the succinctness of the target compilation language, and the class of queries and transformations that the language supports in polytime. We then provide a knowledge compilation map, which analyzes a large number of existing target compilation languages according to their succinctness and their polytime transformations and queries. We argue that such analysis is necessary for placing new compilation approaches within the context of existing ones. We also go beyond classical, flat target compilation languages based on CNF and DNF, and consider a richer, nested class based on directed acyclic graphs (such as OBDDs), which we show to include a relatively large number of target compilation languages.
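For illustration, a small fragment of the resulting map, as established in the published analysis (CO = polytime consistency check, VA = polytime validity check, CT = polytime model counting; "no" means not polytime unless P = NP):

    Language   CO    VA    CT
    CNF        no    yes   no
    DNF        yes   no    no
    DNNF       yes   no    no
    d-DNNF     yes   yes   yes
    OBDD       yes   yes   yes

The trade-off runs the other way for succinctness: CNF and DNF can be exponentially more compact than OBDDs on some formula families.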
Space Efficiency of Propositional Knowledge Representation Formalisms
Cadoli, M., Donini, F. M., Liberatore, P., Schaerf, M.
We investigate the space efficiency of Propositional Knowledge Representation (PKR) formalisms. Intuitively, the space efficiency of a formalism F in representing a certain piece of knowledge A is the size of the shortest formula of F that represents A. In this paper we assume that knowledge is either a set of propositional interpretations (models) or a set of propositional formulae (theorems). We provide a formal way of talking about the relative ability of PKR formalisms to compactly represent a set of models or a set of theorems. We introduce two new compactness measures and the corresponding classes, and show that the relative space efficiency of a PKR formalism in representing models/theorems is directly related to these classes. In particular, we consider formalisms for nonmonotonic reasoning, such as circumscription and default logic, as well as belief revision operators and the stable model semantics for logic programs with negation. One interesting result is that formalisms with the same time complexity do not necessarily belong to the same space efficiency class.
Prime Normal Forms in Belief Merging
Marchi, Jerusa (Universidade Federal de Santa Catarina) | Perrussel, Laurent (Institut de Recherche en Informatique de Toulouse)
The aim of Belief Merging is to aggregate possibly conflicting pieces of information issued from different sources. The quality of the resulting set is usually judged by a closeness criterion between the resulting belief set and the initial belief sets. The notion of distance between belief sets is thus a crucial issue when facing the merging problem. The aim of this paper is twofold: to introduce a syntactic way to calculate distances, and to propose a distance based on prime implicants and prime implicates that takes into account the importance of each propositional symbol in the belief set.
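For contrast, the model-based distance that syntactic definitions are usually measured against is the Dalal (Hamming) distance between interpretations, given here in our notation:

    % distance from an interpretation \omega to a belief set K
    d(\omega, K) \;=\; \min_{\omega' \models K} \big|\{\, p : \omega(p) \neq \omega'(p) \,\}\big|

Working instead on prime implicant/implicate normal forms lets such distances be read off syntactically, and lets each propositional symbol be weighted by its importance in the belief set.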
Possible Worlds and Possible Meanings: A Semantics for the Interpretation of Vague Languages
Bennett, Brandon (University of Leeds)
The paper develops a formal model for interpreting vague languages based on a variant of "supervaluation" semantics. Two modes of semantic variability are modelled, corresponding to different aspects of vagueness: one mode arises where a term admits multiple definitions; the other relates to the threshold of applicability of a vague term with respect to the magnitude of relevant observable values. The truth of a proposition depends on both the possible world and the "precisification" with respect to which it is evaluated. Structures representing possible worlds and precisifications are both specified in terms of primitive functions representing observable measurements, so that the semantics is grounded upon an underlying theory of physical reality. On the basis of this semantics, the acceptability of a proposition to an agent is characterised in terms of a combination of the agent's beliefs about the world and their attitude to admissible interpretations of vague predicates.
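In standard supervaluationist terms (our summary of the usual clauses, which the paper grounds in observation-based structures):

    % \varphi is supertrue at world w iff it holds on every admissible
    % precisification p, where V_{w,p} is the valuation at the pair (w, p).
    \mathrm{supertrue}_w(\varphi) \;\iff\; \forall p \in \Pi_{\mathrm{adm}}:\; V_{w,p}(\varphi) = \mathrm{true}

Roughly, acceptability to an agent then relativises the quantification over precisifications to those the agent regards as admissible, weighted by the agent's beliefs about the world.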