Belief Revision


Learning Embeddings of Directed Networks with Text-Associated Nodes---with Applications in Software Package Dependency Networks

arXiv.org Machine Learning

A network embedding consists of a vector representation for each node in the network. Network embeddings have shown their usefulness in node classification and visualization in many real-world application domains, such as social networks and web networks. While directed networks with text associated with each node, such as citation networks and software package dependency networks, are commonplace, to the best of our knowledge, their embeddings have not been specifically studied. In this paper, we create PCTADW-1 and PCTADW-2, two NN-based algorithms that learn embeddings of directed networks with text associated with each node. We also create two new labeled directed networks with text-associated nodes: the package dependency networks of two popular GNU/Linux distributions, Debian and Fedora. We experimentally demonstrate that the embeddings produced by our NNs yield higher-quality node classification than various baselines on these two networks. We further observe a systematic presence of analogies (similar to those in word embeddings) in the network embeddings of software package dependency networks. To the best of our knowledge, this is the first time such a systematic presence of analogies has been observed in network and document embeddings. This may open up a new avenue for algorithmically understanding networks and documents through their embeddings, as well as for better human understanding of network and document embeddings.
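
A quick illustration of what "analogy" means for embedding vectors, in the spirit of the word-embedding analogy task. This is a minimal sketch with invented three-dimensional vectors and hypothetical Debian-style package names; it is not the output of PCTADW-1 or PCTADW-2.

import numpy as np

# Toy embedding table. The vectors and package names are invented for
# illustration; real embeddings would come from a trained model.
emb = {
    "python3":     np.array([0.9, 0.1, 0.3]),
    "python3-doc": np.array([0.9, 0.1, 0.8]),
    "gcc":         np.array([0.2, 0.7, 0.3]),
    "gcc-doc":     np.array([0.2, 0.7, 0.8]),
}

def nearest(vec, exclude):
    """Return the package whose embedding is most cosine-similar to vec."""
    best, best_sim = None, float("-inf")
    for name, v in emb.items():
        if name in exclude:
            continue
        sim = vec @ v / (np.linalg.norm(vec) * np.linalg.norm(v))
        if sim > best_sim:
            best, best_sim = name, sim
    return best

# Analogy query: python3-doc - python3 + gcc  ~  gcc-doc
query = emb["python3-doc"] - emb["python3"] + emb["gcc"]
print(nearest(query, exclude={"python3-doc", "python3", "gcc"}))  # gcc-doc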



Imaginary Kinematics

arXiv.org Artificial Intelligence

We introduce a novel class of adjustment rules for a collection of beliefs. This is an extension of Lewis' imaging to absorb probabilistic evidence in generalized settings. Unlike standard tools for belief revision, our proposal may be used when information is inconsistent with an agent's belief base. We show that the functionals we introduce are based on the imaginary counterpart of probability kinematics for standard belief revision, and prove that, under certain conditions, all standard postulates for belief revision are satisfied.
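
For readers unfamiliar with imaging: whereas Bayesian conditioning renormalizes the mass of the evidence-worlds, Lewis' imaging moves each world's mass to its closest evidence-world. A minimal discrete sketch follows, assuming bit-vector worlds, Hamming distance as the similarity notion, and arbitrary tie-breaking; the paper's generalized functionals for probabilistic evidence are not reproduced here.

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def image(P, A):
    """Lewis imaging of a distribution P (dict: world -> prob) on evidence A
    (a set of worlds): each world's mass shifts to its nearest A-world."""
    P_img = {w: 0.0 for w in A}
    for w, p in P.items():
        closest = min(A, key=lambda a: hamming(w, a))  # ties broken arbitrarily
        P_img[closest] += p
    return P_img

P = {(0, 0): 0.4, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.1}
A = {(1, 0), (1, 1)}       # evidence: the first atom is true
print(image(P, A))         # {(1, 0): 0.6, (1, 1): 0.4}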


On Strengthening the Logic of Iterated Belief Revision: Proper Ordinal Interval Operators

arXiv.org Artificial Intelligence

Darwiche and Pearl's seminal 1997 article outlined a number of baseline principles for a logic of iterated belief revision. These principles, the DP postulates, have been supplemented in a number of alternative ways. Most of the suggestions made have resulted in a form of 'reductionism' that identifies belief states with orderings of worlds. However, this position has recently been criticised as being unacceptably strong. Other proposals, such as the popular principle (P), aka 'Independence', characteristic of 'admissible' revision operators, remain commendably more modest. In this paper, we supplement both the DP postulates and (P) with a number of novel conditions. While the DP postulates constrain the relation between a prior and a posterior conditional belief set, our new principles notably govern the relation between two posterior conditional belief sets obtained from a common prior by different revisions. We show that operators from the resulting family, which subsumes both lexicographic and restrained revision, can be represented as relating belief states that are associated with a 'proper ordinal interval' (POI) assignment, a structure more fine-grained than a simple ordering of worlds. We close the paper by noting that these operators satisfy iterated versions of a large number of AGM-era postulates, including Superexpansion, that are not sound for admissible operators in general.
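
As background for one member of the family discussed above, lexicographic revision makes every evidence-world strictly more plausible than every non-evidence-world while preserving the relative order within each group. Below is a minimal sketch over rank functions (lower rank = more plausible); the paper's POI assignments are more fine-grained than this and are not reproduced.

def lex_revise(rank, A):
    """Lexicographic revision of a total preorder given as rank: world -> int.
    A-worlds become strictly more plausible than all non-A-worlds; the
    relative order inside each group is preserved."""
    offset = max(rank[w] for w in A) + 1
    return {w: rank[w] if w in A else rank[w] + offset for w in rank}

# Worlds as (p, q) truth values.
rank = {(1, 1): 0, (1, 0): 1, (0, 1): 1, (0, 0): 2}
A = {(0, 1), (0, 0)}          # evidence: p is false
print(lex_revise(rank, A))    # {(1, 1): 3, (1, 0): 4, (0, 1): 1, (0, 0): 2}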


Dependence in Propositional Logic: Formula-Formula Dependence and Formula Forgetting -- Application to Belief Update and Conservative Extension

arXiv.org Artificial Intelligence

Dependence is an important concept in many artificial intelligence tasks: a task can often be executed more efficiently by discarding whatever is independent of it. In this paper, we propose two novel notions of dependence in propositional logic: formula-formula dependence and formula forgetting. The first is a relation between formulas capturing whether one formula depends on another; the second is an operation that returns the strongest consequence independent of a formula. We then apply these two notions to two well-known issues: belief update and conservative extension. First, we define a new update operator based on formula-formula dependence. Second, we reduce conservative extension to formula forgetting.
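
For orientation, formula forgetting generalizes classical variable forgetting, where the strongest consequence of phi independent of a variable p is phi[p/true] | phi[p/false]. A sketch of the classical variable case using sympy (the paper's formula-level operator is not reproduced here):

from sympy import symbols
from sympy.logic.boolalg import Or, simplify_logic

p, q, r = symbols("p q r")

def forget_var(phi, v):
    """Classical variable forgetting: the strongest consequence of phi
    that is independent of v, i.e. phi[v/True] | phi[v/False]."""
    return simplify_logic(Or(phi.subs(v, True), phi.subs(v, False)))

phi = (p | q) & (~p | r)
print(forget_var(phi, p))   # q | r, up to logical equivalence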


Integrating Human-Provided Information Into Belief State Representation Using Dynamic Factorization

arXiv.org Artificial Intelligence

In partially observed environments, it can be useful for a human to provide the robot with declarative information that represents probabilistic relational constraints on properties of objects in the world, augmenting the robot's sensory observations. For instance, a robot tasked with a search-and-rescue mission may be informed by the human that two victims are probably in the same room. An important question arises: how should we represent the robot's internal knowledge so that this information is correctly processed and combined with raw sensory information? In this paper, we provide an efficient belief state representation that dynamically selects an appropriate factoring, combining aspects of the belief state when information correlates them and separating them when it does not. This strategy works in open domains, in which the set of possible objects is not known in advance, and provides significant improvements in inference time over a static factoring, leading to more efficient planning for complex partially observed tasks. We validate our approach experimentally in two open-domain planning problems: a 2D discrete gridworld task and a 3D continuous cooking task.
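
The data-structure idea at the heart of this (keep belief factors separate until information correlates them, then join them) can be pictured with a toy two-variable example. The scenario, weights, and function below are invented for illustration and are not the authors' representation.

def merge_with_constraint(m1, m2, compat, weight=0.9):
    """Join two independent marginals into one factor when new information
    correlates them. compat(a, b) marks the assignments favoured by the
    human-provided constraint, which receives relative weight `weight`."""
    joint = {}
    for a, pa in m1.items():
        for b, pb in m2.items():
            w = weight if compat(a, b) else 1.0 - weight
            joint[(a, b)] = pa * pb * w
    z = sum(joint.values())
    return {ab: p / z for ab, p in joint.items()}

# Two victims' room locations, initially independent and uniform.
room1 = {"A": 0.5, "B": 0.5}
room2 = {"A": 0.5, "B": 0.5}
# Human: "the two victims are probably in the same room" -> merge the factors.
joint = merge_with_constraint(room1, room2, compat=lambda a, b: a == b)
print(joint)   # same-room assignments now carry 0.45 each, others 0.05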


Analysis of Jeffrey's Rule of Conditioning in an Imprecise Probabilistic Setting

AAAI Conferences

While sets of probability measures, and imprecise probabilities in general, are widely accepted as a powerful and unifying framework for handling uncertain and incomplete information, updating such belief sets with new uncertain inputs has not received enough attention. In this paper, we provide an analysis of Jeffrey's rule of conditioning for updating sets of probability measures with new information, possibly uncertain and imprecise, also expressed as sets of probability measures. The paper first provides properties for updating sets of probability measures in the spirit of Jeffrey's rule, then provides and analyses extensions of Jeffrey's rule to two main imprecise probability representations: (i) finite sets of probability measures and (ii) convex sets of probability measures specified by their extreme points. The proposed extensions satisfy the proposed postulates and recover the standard Jeffrey's rule in the case where the updated set and the new input are single probability measures.
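
Recall Jeffrey's rule for a single measure: given a partition {E_i} with new weights q_i, the updated measure is P'(A) = sum_i q_i * P(A | E_i). The extension to representation (i) can be sketched as the obvious pointwise application of this rule to every measure in the finite set; the sketch below assumes P(E_i) > 0 for each cell and does not reproduce the paper's postulates.

def jeffrey(P, partition, q):
    """Jeffrey's rule for one measure P (dict: world -> prob).
    partition: disjoint world-sets E_i covering the space; q: their new
    probabilities. P'(w) = q_i * P(w) / P(E_i) for w in E_i."""
    P_new = {}
    for E, qi in zip(partition, q):
        mass = sum(P[w] for w in E)     # assumed > 0
        for w in E:
            P_new[w] = qi * P[w] / mass
    return P_new

def jeffrey_set(measures, partition, q):
    """Pointwise extension to a finite set of probability measures."""
    return [jeffrey(P, partition, q) for P in measures]

P1 = {"w1": 0.5, "w2": 0.3, "w3": 0.2}
P2 = {"w1": 0.2, "w2": 0.2, "w3": 0.6}
partition = [{"w1", "w2"}, {"w3"}]
print(jeffrey_set([P1, P2], partition, q=[0.4, 0.6]))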


Modeling Belief Change on Epistemic States

AAAI Conferences

Belief revision always results in trusting new evidence, so it may admit an unreliable piece of evidence and discard a more reliable one. We therefore use belief change instead of belief revision to remedy this weakness. By introducing epistemic states, we take into account the strength of the evidence that influences the change of belief. In this paper, we present a set of postulates to characterize belief change on epistemic states and establish representation theorems for those postulates. We show that from an epistemic state, a corresponding ordinal conditional function in the sense of Spohn can be derived, so that combining two epistemic states reduces to combining the two corresponding ordinal conditional functions as proposed by Laverny and Lang. Furthermore, when restricted to the belief revision setting, we prove that our results induce all of Darwiche and Pearl's postulates.
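
For background, a Spohn ordinal conditional function (OCF) assigns each world a degree of disbelief, with minimum rank 0, and two OCFs can be combined by pointwise addition followed by renormalization. Below is a minimal sketch of that combination step with invented ranks; the paper's postulates and representation theorems are not reproduced here.

def normalize(kappa):
    """Shift an ordinal conditional function so its minimum rank is 0."""
    m = min(kappa.values())
    return {w: r - m for w, r in kappa.items()}

def combine(k1, k2):
    """Combine two OCFs by pointwise addition, then renormalize."""
    return normalize({w: k1[w] + k2[w] for w in k1})

k1 = {"pq": 0, "p~q": 1, "~pq": 2, "~p~q": 3}   # strongly believes p
k2 = {"pq": 2, "p~q": 2, "~pq": 0, "~p~q": 0}   # evidence of strength 2 for ~p
print(combine(k1, k2))   # {'pq': 0, 'p~q': 1, '~pq': 0, '~p~q': 1}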


Morphologic for knowledge dynamics: revision, fusion, abduction

arXiv.org Artificial Intelligence

Several tasks in artificial intelligence require the ability to model knowledge dynamics; these include belief revision, fusion and merging of beliefs, and abduction. In this paper we exploit the algebraic framework of mathematical morphology in the context of propositional logic and define operations such as dilation and erosion of a set of formulas. We derive concrete operators, based on a semantic approach, that have an intuitive interpretation and are formally well behaved, and use them to perform revision, fusion and abduction. Computation and tractability are addressed, and simple examples illustrate the typical results that can be obtained.
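
The morphological operators can be pictured on sets of propositional models: dilating by a Hamming ball of radius 1 adds every world reachable by flipping one atom, and a natural revision operator dilates the belief set until it becomes consistent with the input. The sketch below illustrates this flavour under the usual Hamming-distance structuring element; it is not the paper's exact construction.

def dilate(models, n):
    """Morphological dilation: add every world at Hamming distance <= 1."""
    out = set(models)
    for w in models:
        for i in range(n):
            out.add(w[:i] + (1 - w[i],) + w[i + 1:])
    return out

def revise(belief, evidence, n):
    """Dilate the belief models until they meet the evidence models, then
    keep the intersection (a dilation-based revision operator)."""
    current = set(belief)
    while not current & evidence:
        current = dilate(current, n)
    return current & evidence

n = 2                                 # worlds over atoms (p, q)
belief = {(1, 1)}                     # believes p and q
evidence = {(0, 0)}                   # learns that both are false
print(revise(belief, evidence, n))    # {(0, 0)}, after two dilations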


In Praise of Belief Bases: Doing Epistemic Logic Without Possible Worlds

AAAI Conferences

We introduce a new semantics for a logic of explicit and implicit beliefs based on the concept of a multi-agent belief base. Unlike existing Kripke-style semantics for epistemic logic, in which the notions of possible world and doxastic/epistemic alternative are primitive, in our semantics these notions are not primitive but are defined from the concept of belief base. We provide a complete axiomatization and a decidability result for our logic.
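
The explicit/implicit distinction can be made operational: an agent explicitly believes exactly the formulas in its base, and implicitly believes whatever holds in every valuation satisfying the base. Below is a minimal single-agent propositional sketch with clauses as sets of literals; the paper's multi-agent language and axiomatization are not reproduced.

from itertools import product

def satisfies(valuation, clause):
    """clause: a set of literals such as {'p', '~q'}; a clause is satisfied
    when at least one of its literals holds under the valuation."""
    return any(valuation[l.lstrip('~')] != l.startswith('~') for l in clause)

def implicitly_believes(base, goal, atoms):
    """True iff `goal` holds in every valuation satisfying the belief base."""
    for bits in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, bits))
        if all(satisfies(v, c) for c in base) and not satisfies(v, goal):
            return False
    return True

base = [{'p'}, {'~p', 'q'}]        # explicit beliefs: p, and p -> q
print(implicitly_believes(base, {'q'}, atoms=['p', 'q']))   # True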