
Belief Revision: Instructional Materials


Visions of a generalized probability theory

arXiv.org Artificial Intelligence

In this book we argue that the fruitful interaction of computer vision and belief calculus is capable of stimulating significant advances in both fields. From a methodological point of view, novel theoretical results concerning the geometric and algebraic properties of belief functions as mathematical objects are illustrated and discussed in Part II, with a focus on both a prospective 'geometric approach' to uncertainty and an algebraic solution to the issue of conflicting evidence. In Part III we show how these theoretical developments arise from important computer vision problems (such as articulated object tracking, data association and object pose estimation) to which, in turn, the evidential formalism is able to provide interesting new solutions. Finally, some initial steps are taken towards a generalization of the notion of total probability to belief functions, with the aim of endowing the theory of evidence with a complete battery of estimation and inference tools to the benefit of all scientists and practitioners.
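
As a point of reference for the formalism discussed above: a belief function on a finite frame is induced by a mass assignment over subsets, with belief obtained by summing the mass of contained subsets and plausibility the mass of intersecting ones. A minimal Python sketch of these standard evidence-theory definitions (the frame and mass values are illustrative, not taken from the book):

    # bel(A) sums the mass of all subsets of A;
    # pl(A) sums the mass of all sets intersecting A.
    def belief(mass, a):
        return sum(v for b, v in mass.items() if b <= a)

    def plausibility(mass, a):
        return sum(v for b, v in mass.items() if b & a)

    # Illustrative mass assignment on the frame {x, y, z}.
    m = {frozenset({"x"}): 0.5,
         frozenset({"x", "y"}): 0.3,
         frozenset({"x", "y", "z"}): 0.2}
    A = frozenset({"x", "y"})
    print(belief(m, A), plausibility(m, A))  # 0.8 1.0

When all mass sits on singletons the two values coincide and an ordinary probability measure is recovered, which is the sense in which belief functions generalize probabilities.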


Belief Integration and Source Reliability Assessment

Journal of Artificial Intelligence Research

Merging beliefs requires assessing the reliability of the sources of the information to be merged. The sources are typically assumed equally reliable when nothing suggests otherwise. A recent line of research has sprung from the idea of deriving this information from the revision process itself: the history of previous revisions and previous merging examples provides information for performing subsequent merging operations. Yet, no examples or previous revisions may be available. In spite of this apparent lack of information, something can still be inferred by a try-and-check approach: a relative reliability ordering is assumed, the sources are integrated according to it, and the result is compared with the original information. The final check may contradict the assumed ordering, as when the result of merging implies the negation of a formula coming from a source initially assumed reliable, or implies a formula coming from a source assumed unreliable. In such cases, the reliability ordering assumed in the first place can be excluded from consideration. This scenario is shown to arise under the classifications of source reliability and the definitions of belief integration considered in this article: sources are divided into two, three, or multiple reliability classes, and integration is mostly by maximal consistent subsets, though weighted distance is also considered.
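
The try-and-check idea can be made concrete in a few lines. The sketch below is a toy reconstruction, not the article's actual definitions: formulas are represented as sets of satisfying worlds over two atoms, reliable sources are merged by greedily intersecting a maximal consistent subset, and a two-class reliability assignment is excluded exactly when the merge result entails the negation of a reliable formula or entails an unreliable one.

    from itertools import combinations

    # Toy propositional setting: worlds are (p, q) truth pairs and a
    # "formula" is the set of worlds satisfying it.
    WORLDS = [(p, q) for p in (0, 1) for q in (0, 1)]
    P     = {w for w in WORLDS if w[0] == 1}   # p
    NOT_P = {w for w in WORLDS if w[0] == 0}   # not p
    Q     = {w for w in WORLDS if w[1] == 1}   # q
    SOURCES = [P, NOT_P, Q]

    def entails(a, b):
        return a <= b  # every model of a is a model of b

    def merge(reliable):
        # Intersect a maximal consistent subset of the reliable sources,
        # chosen greedily in index order (one maximal-consistent-subset policy).
        result = set(WORLDS)
        for f in reliable:
            if result & f:
                result &= f
        return result

    def excluded(reliable_idx):
        # Try-and-check: reject the assignment when the merge result entails
        # the negation of a reliable formula or entails an unreliable one.
        reliable = [SOURCES[i] for i in reliable_idx]
        unreliable = [SOURCES[i] for i in range(len(SOURCES))
                      if i not in reliable_idx]
        merged = merge(reliable)
        return (any(entails(merged, set(WORLDS) - f) for f in reliable)
                or any(entails(merged, f) for f in unreliable))

    for k in range(1, len(SOURCES) + 1):
        for idx in combinations(range(len(SOURCES)), k):
            print(idx, "excluded" if excluded(idx) else "kept")

In this toy run, precisely the assignments that rank the two contradictory sources p and not-p as jointly reliable are ruled out.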


Preference-Based Inconsistency Management in Multi-Context Systems

Journal of Artificial Intelligence Research

Multi-Context Systems (MCS) are a powerful framework for interlinking possibly heterogeneous, autonomous knowledge bases, where information can be exchanged among knowledge bases by designated bridge rules with negation as failure. An acknowledged issue with MCS is inconsistency that arises due to the information exchange. To remedy this problem, inconsistency removal has been proposed in terms of repairs, which modify bridge rules based on suitable notions of diagnosis of inconsistency. In general, multiple diagnoses and repairs exist; this leaves the user, who arguably may oversee the inconsistency removal, with the task of selecting some repair among all possible ones. To aid in this regard, we extend the MCS framework with preference information for diagnoses, such that undesired diagnoses are filtered out and diagnoses that are most preferred according to a preference ordering are selected. We consider preference information at a generic level and develop meta-reasoning techniques on diagnoses in MCS that can be exploited to reduce preference-based selection of diagnoses to computing ordinary subset-minimal diagnoses in an extended MCS. We describe two meta-reasoning encodings for preference orders: the first is conceptually simple but may incur an exponential blowup; the second grows only linearly in size and is based on duplicating the original MCS. The latter requires nondeterministic guessing if a subset-minimal diagnosis among all most preferred diagnoses is to be computed. However, a complexity analysis shows that this is worst-case optimal and that, in general, preferred diagnoses have the same complexity as subset-minimal ordinary diagnoses. Furthermore, (subset-minimal) filtered diagnoses and (subset-minimal) ordinary diagnoses also have the same complexity.
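
To make the objects being compared concrete, the following brute-force sketch computes diagnoses under a deliberately simplified reading. The assumptions are mine, not the paper's: a diagnosis here is just a set of bridge rules whose deactivation restores consistency, consistency is an opaque callback, and the preference is an arbitrary callable; the paper's actual diagnoses are pairs of rule sets and its encodings avoid this exhaustive enumeration.

    from itertools import chain, combinations

    def powerset(xs):
        xs = list(xs)
        return [frozenset(c)
                for c in chain.from_iterable(combinations(xs, r)
                                             for r in range(len(xs) + 1))]

    def diagnoses(bridge_rules, consistent):
        # Simplified diagnosis: a set of bridge rules whose removal
        # makes the MCS consistent again.
        return [d for d in powerset(bridge_rules) if consistent(d)]

    def subset_minimal(diags):
        return [d for d in diags if not any(e < d for e in diags)]

    def preferred(diags, keep, better):
        # Filter out undesired diagnoses, then keep the most preferred
        # ones under the strict ordering `better`.
        diags = [d for d in diags if keep(d)]
        return [d for d in diags if not any(better(e, d) for e in diags)]

    # Toy MCS: rules r1 and r2 jointly cause inconsistency.
    rules = ["r1", "r2", "r3"]
    consistent = lambda removed: not {"r1", "r2"} <= (set(rules) - removed)
    minimal = subset_minimal(diagnoses(rules, consistent))  # {r1}, {r2}
    # Illustrative preference: prefer diagnoses that spare r1.
    best = preferred(minimal, keep=lambda d: True,
                     better=lambda e, d: "r1" not in e and "r1" in d)
    print(minimal, best)  # best keeps only {r2}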


Logical Formalizations of Commonsense Reasoning: A Survey

Journal of Artificial Intelligence Research

Commonsense reasoning is in principle a central problem in artificial intelligence, but it is a very difficult one. One approach that has been pursued since the earliest days of the field has been to encode commonsense knowledge as statements in a logic-based representation language and to implement commonsense reasoning as some form of logical inference. This paper surveys the use of logic-based representations of commonsense knowledge in artificial intelligence research.



Machine Intelligence 11

AI Classics

In this paper we will be concerned with such reasoning in its most general form, that is, with inferences that are defeasible: given more information, we may retract them. The purpose of this paper is to introduce a form of non-monotonic inference based on the notion of a partial model of the world. We take partial models to reflect our partial knowledge of the true state of affairs. We then define non-monotonic inference as the process of filling in unknown parts of the model with conjectures: statements that could turn out to be false, given more complete knowledge. To take a standard example from default reasoning: since most birds can fly, if Tweety is a bird it is reasonable to assume that she can fly, at least in the absence of any information to the contrary. We thus have some justification for filling in our partial picture of the world with this conjecture. If our knowledge includes the fact that Tweety is an ostrich, then no such justification exists, and the conjecture must be retracted.
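
A minimal sketch of the Tweety pattern under an assumed three-valued encoding (atoms map to True, False, or None for unknown; the representation and function names are illustrative, not from the paper):

    def apply_default(model, prerequisite, blocker, conclusion):
        # Fill in an unknown part of the partial model with a conjecture:
        # conclude `conclusion` when the prerequisite is known to hold and
        # the blocker is not known to hold.
        if (model.get(prerequisite) is True
                and model.get(blocker) is not True
                and model.get(conclusion) is None):
            return dict(model, **{conclusion: True})
        return model

    tweety = {"bird": True, "ostrich": None, "flies": None}
    print(apply_default(tweety, "bird", "ostrich", "flies"))
    # {'bird': True, 'ostrich': None, 'flies': True} -- conjecture drawn

    # With more complete knowledge the conjecture is blocked, mirroring
    # the retraction of a previously drawn conclusion:
    ostrich = {"bird": True, "ostrich": True, "flies": None}
    print(apply_default(ostrich, "bird", "ostrich", "flies"))
    # flies stays unknown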


Z.til

AI Classics

This paper describes some work on automatically generating finite counterexamples in topology and on the use of counterexamples to speed up proof discovery in intermediate analysis, and gives some example theorems where human provers are aided in proof discovery by the use of examples.
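
As a flavour of what "automatically generating finite counterexamples" can mean, the brute-force sketch below enumerates candidate topologies on a three-point set and hunts for a T0 space that is not T1, refuting the false conjecture that T0 implies T1. The search strategy is my own illustration, not the paper's method.

    from itertools import chain, combinations

    X = frozenset({0, 1, 2})
    SUBSETS = [frozenset(c)
               for c in chain.from_iterable(combinations(sorted(X), r)
                                            for r in range(len(X) + 1))]

    def is_topology(opens):
        # For finite collections, closure under binary union and
        # intersection suffices (arbitrary unions reduce to binary ones).
        return all(a | b in opens and a & b in opens
                   for a in opens for b in opens)

    def is_t0(opens):
        # T0: any two distinct points are separated by some open set.
        return all(any((x in u) != (y in u) for u in opens)
                   for x in X for y in X if x != y)

    def is_t1(opens):
        # T1: every singleton is closed, i.e. its complement is open.
        return all(X - {x} in opens for x in X)

    for choice in chain.from_iterable(combinations(SUBSETS, r)
                                      for r in range(len(SUBSETS) + 1)):
        opens = set(choice) | {frozenset(), X}
        if is_topology(opens) and is_t0(opens) and not is_t1(opens):
            print(sorted(map(sorted, opens)))  # e.g. [[], [0], [0, 1], [0, 1, 2]]
            break

The counterexample found is a three-point analogue of the Sierpinski space: every pair of points is separated by an open set, yet not every singleton is closed.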


On Action Theory Change

Journal of Artificial Intelligence Research

As historically acknowledged in the Reasoning about Actions and Change community, intuitiveness of a logical domain description cannot be fully automated. Moreover, like any other logical theory, action theories may evolve, and thus knowledge engineers need revision methods to help accommodate new incoming information about the behavior of actions in an adequate manner. The present work is about changing action domain descriptions in multimodal logic. Its contribution is threefold: first, we revisit the semantics of action theory contraction proposed in previous work, giving more robust operators that express minimal change based on a notion of distance between Kripke models. Second, we give algorithms for syntactical action theory contraction and establish their correctness with respect to our semantics for those action theories that satisfy a principle of modularity investigated in previous work. Since modularity can be ensured for every action theory and, as we show here, needs to be computed at most once during the evolution of a domain description, it does not represent a limitation to the method studied here. Finally, we state AGM-like postulates for action theory contraction and assess the behavior of our operators with respect to them. We also address the revision counterpart of action theory change, showing that it benefits from our semantics for contraction.
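
The "distance between Kripke models" driving minimal change can be illustrated with one simple candidate metric. The encoding and the symmetric-difference distance below are assumptions made for illustration; the paper's actual operators are defined over multimodal action theories and may measure change differently.

    # Assumed encoding: a Kripke model is a pair (edges, val), where
    # `edges` is a set of (world, action, world') accessibility triples
    # and `val` is a set of (world, atom) pairs that hold.
    def distance(m1, m2):
        (edges1, val1), (edges2, val2) = m1, m2
        return len(edges1 ^ edges2) + len(val1 ^ val2)

    def closest(models_of_theory, candidates):
        # Minimal-change selection: keep the candidate models at minimal
        # distance from some model of the current theory.
        scored = [(min(distance(c, m) for m in models_of_theory), c)
                  for c in candidates]
        best = min(score for score, _ in scored)
        return [c for score, c in scored if score == best]

    # Toy example: dropping one accessibility edge is a smaller change
    # than dropping an edge and flipping an atom, so c1 is selected.
    m0 = ({("w0", "a", "w1")}, {("w1", "p")})
    c1 = (set(),               {("w1", "p")})
    c2 = (set(),               set())
    print(closest([m0], [c1, c2]))  # -> [c1]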