Goto


Nonmonotonic Logic


Temporal Many-valued Conditional Logics: a Preliminary Report

arXiv.org Artificial Intelligence

In this paper we propose a many-valued temporal conditional logic. We start from a many-valued logic with typicality and extend it with the temporal operators of Linear Time Temporal Logic (LTL), thus providing a formalism able to capture the dynamics of a system through strict and defeasible temporal properties. We also consider an instantiation of the formalism for gradual argumentation.
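Since the abstract builds on LTL, the classical finite-trace reading of its operators can be sketched as follows (a two-valued toy in Python; the trace and formulas are invented for illustration, and the paper's many-valued semantics is not reproduced):

```python
# Minimal evaluator for core LTL operators over a finite trace.
# A trace is a list of sets of atoms true at each instant.
# Illustrative sketch of the classical two-valued operators the
# paper builds on, not the paper's many-valued semantics.

def holds(formula, trace, i=0):
    """Evaluate a formula at position i of a finite trace."""
    op = formula[0]
    if op == "atom":                       # ("atom", name)
        return formula[1] in trace[i]
    if op == "not":
        return not holds(formula[1], trace, i)
    if op == "and":
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == "X":                          # next: a next instant exists and satisfies the argument
        return i + 1 < len(trace) and holds(formula[1], trace, i + 1)
    if op == "F":                          # eventually
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "G":                          # always
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "U":                          # until
        return any(holds(formula[2], trace, j)
                   and all(holds(formula[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(op)

trace = [{"req"}, {"req"}, {"grant"}]
print(holds(("F", ("atom", "grant")), trace))               # True: grant eventually holds
print(holds(("G", ("atom", "req")), trace))                 # False: req fails at the last instant
print(holds(("U", ("atom", "req"), ("atom", "grant")), trace))  # True: req until grant
```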


A Primer for Preferential Non-Monotonic Propositional Team Logics

arXiv.org Artificial Intelligence

This paper considers KLM-style preferential non-monotonic reasoning in the setting of propositional team semantics. We show that team-based propositional logics naturally give rise to cumulative non-monotonic entailment relations. Motivated by the non-classical interpretation of disjunction in team semantics, we give a precise characterization of the preferential models for propositional dependence logic that satisfy all postulates of System P. Furthermore, we show how classical entailment and dependence logic entailment can be expressed in terms of non-trivial preferential models.
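For readers unfamiliar with team semantics, the dependence atom at the heart of propositional dependence logic can be sketched as follows (an illustrative Python toy; the team and variables are invented):

```python
# A team is a set of assignments (dicts from variables to values).
# The dependence atom =(x1,...,xn; y) holds in a team iff any two
# assignments that agree on x1..xn also agree on y.
# Sketch of the standard definition from propositional dependence logic.

def dependence_atom(team, xs, y):
    for s in team:
        for t in team:
            if all(s[x] == t[x] for x in xs) and s[y] != t[y]:
                return False
    return True

# A team where r = p XOR q:
team = [
    {"p": 0, "q": 0, "r": 0},
    {"p": 0, "q": 1, "r": 1},
    {"p": 1, "q": 0, "r": 1},
]
print(dependence_atom(team, ["p", "q"], "r"))   # True: p and q jointly determine r
print(dependence_atom(team, ["p"], "r"))        # False: p alone does not fix r
```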


Eliminating Unintended Stable Fixpoints for Hybrid Reasoning Systems

arXiv.org Artificial Intelligence

A wide variety of nonmonotonic semantics can be expressed as approximators defined under AFT (Approximation Fixpoint Theory). Using traditional AFT, it is not possible to define approximators that rely on information computed in previous iterations of stable revision. However, this information is valuable for semantics that incorporate classical negation into nonmonotonic reasoning. In this work, we introduce a methodology resembling AFT that can utilize previously computed upper bounds to capture semantics more precisely. We demonstrate our framework's applicability to hybrid MKNF (minimal knowledge and negation as failure) knowledge bases by extending the state-of-the-art approximator.
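Standard AFT, which the abstract extends, can be illustrated with a toy Kripke-Kleene computation (a Python sketch using Fitting's four-valued operator on a hypothetical two-choice program; it does not reproduce the paper's use of previously computed upper bounds):

```python
# Kripke-Kleene fixpoint of an AFT-style approximator for a normal
# logic program -- a toy illustration of standard AFT, not the
# paper's extended framework. A rule is (head, positives, negatives);
# a state is a pair (L, U) of surely-true and possibly-true atoms.

def approximator(rules, L, U):
    """Fitting's four-valued one-step operator on (lower, upper) pairs."""
    newL = {h for h, pos, neg in rules
            if pos <= L and not (neg & U)}     # body surely true
    newU = {h for h, pos, neg in rules
            if pos <= U and not (neg & L)}     # body possibly true
    return newL, newU

def kripke_kleene(rules):
    L, U = set(), {h for h, _, _ in rules}     # least precise state
    while True:
        newL, newU = approximator(rules, L, U)
        if (newL, newU) == (L, U):
            return L, U
        L, U = newL, newU

# p :- not q.   q :- not p.   r.
rules = [("p", set(), {"q"}), ("q", set(), {"p"}), ("r", set(), set())]
L, U = kripke_kleene(rules)
print(L, U)   # r is surely true; p and q remain unknown
```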


Learning Assumption-based Argumentation Frameworks

arXiv.org Artificial Intelligence

We propose a novel approach to logic-based learning which generates assumption-based argumentation (ABA) frameworks from positive and negative examples, using given background knowledge. These ABA frameworks can be mapped onto logic programs with negation as failure that may be non-stratified. Whereas existing argumentation-based methods learn exceptions to general rules by interpreting the exceptions as rebuttal attacks, our approach interprets them as undercutting attacks. Our learning technique is based on the use of transformation rules, including some adapted from logic program transformation rules (notably folding) as well as others, such as rote learning and assumption introduction. We present a general strategy that applies the transformation rules in a suitable order to learn stratified frameworks, and we also propose a variant that handles the non-stratified case. We illustrate the benefits of our approach with a number of examples, which show that, on the one hand, we are able to easily reconstruct other logic-based learning approaches and, on the other hand, we can solve, in a simple and natural way, problems that appear hard for existing techniques.
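The flavor of undercutting in ABA can be illustrated with a minimal sketch (Python; the flying-bird example and all names are hypothetical, not taken from the paper):

```python
# Minimal assumption-based argumentation (ABA) sketch: rules over
# atoms, a set of assumptions, and a contrary function. An attack on
# an assumption succeeds when the rules derive its contrary -- the
# undercutting reading the abstract contrasts with rebuttal.
# The example is illustrative, not from the paper.

def derives(rules, assumptions, goal, seen=None):
    """Backward chaining: can `goal` be derived from rules + assumptions?"""
    seen = seen or frozenset()
    if goal in assumptions:
        return True
    if goal in seen:                 # guard against cyclic rules
        return False
    return any(all(derives(rules, assumptions, b, seen | {goal}) for b in body)
               for head, body in rules if head == goal)

rules = [
    ("flies", ["bird", "normal_bird"]),   # birds normally fly
    ("not_normal_bird", ["penguin"]),     # penguins are exceptions
    ("bird", []), ("penguin", []),
]
assumptions = {"normal_bird"}
contrary = {"normal_bird": "not_normal_bird"}

# The default conclusion is supported by the assumption normal_bird,
# but the exception undercuts it by deriving the assumption's contrary.
print(derives(rules, assumptions, "flies"))             # True
print(derives(rules, set(), contrary["normal_bird"]))   # True: attack succeeds
```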


On Trivalent Logics, Compound Conditionals, and Probabilistic Deduction Theorems

arXiv.org Artificial Intelligence

In this paper we recall some results for conditional events, compound conditionals, conditional random quantities, p-consistency, and p-entailment. Then, we show the equivalence between bets on conditionals and conditional bets, by reviewing de Finetti's trivalent analysis of conditionals. But our approach goes beyond de Finetti's early trivalent logical analysis and is based on his later ideas, aiming to take his proposals to a higher level. We examine two recent articles that explore trivalent logics for conditionals and their definitions of logical validity and compare them with our approach to compound conditionals. We prove a Probabilistic Deduction Theorem for conditional events. After that, we study some probabilistic deduction theorems, by presenting several examples. We focus on iterated conditionals and the invalidity of the Import-Export principle in the light of our Probabilistic Deduction Theorem. We use the inference from a disjunction, "$A$ or $B$", to the conditional, "if not-$A$ then $B$", as an example to show the invalidity of the Import-Export principle. We also introduce a General Import-Export principle and illustrate it by examining some p-valid inference rules of System P. Finally, we briefly discuss some related work relevant to AI.
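De Finetti's trivalent table for the conditional, which the abstract reviews, can be sketched directly (Python; the disjunction-to-conditional inference shown is the one discussed above):

```python
# De Finetti's trivalent table for the conditional B|A:
# true when A and B hold, false when A holds but B fails, and
# void (undefined) when A fails -- the bet is called off.
# Sketch of the trivalent analysis the paper reviews.

TRUE, FALSE, VOID = "true", "false", "void"

def conditional(a, b):
    if not a:
        return VOID
    return TRUE if b else FALSE

# The conditional "if not-A then B" across the four cases of (A, B):
for a in (True, False):
    for b in (True, False):
        print(a, b, conditional(not a, b))
```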


Deontic Meta-Rules

arXiv.org Artificial Intelligence

The use of meta-rules in logic, i.e., rules whose content includes other rules, has recently gained attention in the setting of non-monotonic reasoning: a first logical formalisation and efficient algorithms to compute the (meta-)extensions of such theories were proposed in Olivieri et al. (2021). This work extends that logical framework by considering the deontic aspect. The resulting logic is able not just to model policies but also to tackle well-known aspects that occur in numerous legal systems. The use of Defeasible Logic (DL) to model meta-rules in the application area just alluded to has been investigated, but the study mentioned above did not focus on the general computational properties of meta-rules. This study fills that gap with two major contributions. First, we introduce and formalise two variants of Defeasible Deontic Logic with Meta-Rules to represent (1) defeasible meta-theories with deontic modalities, and (2) two different types of conflicts among rules: Simple Conflict Defeasible Deontic Logic, and Cautious Conflict Defeasible Deontic Logic. Second, we advance efficient algorithms to compute the extensions for both variants.
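The defeasible-logic core underlying these variants can be sketched in miniature (Python; a bare propositional toy with a superiority relation, far from the full deontic, meta-rule machinery of the paper):

```python
# Minimal propositional defeasible-logic sketch: facts, defeasible
# rules, and a superiority relation between rules. A literal is
# defeasibly provable if some applicable rule concludes it and every
# applicable rule for the opposite conclusion is overridden by a
# superior supporting rule. A toy core only, not the paper's logic.

def neg(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def defeasibly(facts, rules, superior, lit):
    applicable = [r for r in rules if all(b in facts for b in rules[r][1])]
    supporting = [r for r in applicable if rules[r][0] == lit]
    opposing = [r for r in applicable if rules[r][0] == neg(lit)]
    return bool(supporting) and all(
        any((s, o) in superior for s in supporting) for o in opposing)

facts = {"bird", "penguin"}
rules = {                        # name: (head, body)
    "r1": ("flies", ["bird"]),
    "r2": ("~flies", ["penguin"]),
}
superior = {("r2", "r1")}        # the penguin rule wins conflicts

print(defeasibly(facts, rules, superior, "~flies"))   # True
print(defeasibly(facts, rules, superior, "flies"))    # False
```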


On Establishing Robust Consistency in Answer Set Programs

arXiv.org Artificial Intelligence

Answer set programs used in real-world applications often require that the program be usable with different input data. This, however, can lead to contradictory statements and consequently to an inconsistent program. Causes for potential contradictions in a program are conflicting rules. In this paper, we show how to ensure that a program $\mathcal{P}$ remains non-contradictory given any allowed set of such input data. For that, we introduce the notion of conflict-resolving $\lambda$-extensions. A conflict-resolving $\lambda$-extension for a conflicting rule $r$ is a set $\lambda$ of (default) literals such that extending the body of $r$ by $\lambda$ resolves all conflicts of $r$ at once. We investigate the properties that suitable $\lambda$-extensions should possess and, building on that, we develop a strategy to compute all such conflict-resolving $\lambda$-extensions for each conflicting rule in $\mathcal{P}$. We show that a conflict resolution process that successively resolves conflicts using $\lambda$-extensions eventually yields a program that remains non-contradictory given any allowed set of input data.
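The idea of a $\lambda$-extension can be illustrated on a two-rule toy (Python; bodies are restricted to positive atoms, and the concrete strategy of default-negating the other body is one simple choice, not the paper's full characterization):

```python
# Toy sketch of resolving a conflict between two rules with
# complementary heads by extending one body with default literals
# (a "lambda-extension" in the paper's terminology). Bodies are
# restricted to positive atoms; the strategy of default-negating
# the other rule's body is one simple choice, not the paper's
# full characterization. A rule is (head, body).

def conflicting(r1, r2):
    """Complementary heads, so both rules firing yields a contradiction."""
    (h1, _), (h2, _) = r1, r2
    return h1 == "-" + h2 or h2 == "-" + h1

def lambda_extension(rule, other):
    """Extend rule's body with 'not b' for each body atom of the
    conflicting rule, so the two bodies can never fire together."""
    head, body = rule
    _, other_body = other
    lam = ["not " + b for b in other_body if b not in body]
    return head, body + lam

r1 = ("cross",  ["green"])
r2 = ("-cross", ["red"])
print(conflicting(r1, r2))        # True
print(lambda_extension(r1, r2))   # ('cross', ['green', 'not red'])
```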


Tachmazidis

AAAI Conferences

We are witnessing an explosion of available data from the Web, government authorities, scientific databases, sensors and more. Such datasets could benefit from the introduction of rule sets encoding commonly accepted rules or facts, application- or domain-specific rules, commonsense knowledge etc. This raises the question of whether, how, and to what extent knowledge representation methods are capable of handling the vast amounts of data for these applications. In this paper, we consider nonmonotonic reasoning, which has traditionally focused on rich knowledge structures. In particular, we consider defeasible logic, and analyze how parallelization, using the MapReduce framework, can be used to reason with defeasible rules over huge data sets. Our experimental results demonstrate that defeasible reasoning over billions of facts is performant, and has the potential to scale to trillions of facts.
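The data-parallel pattern can be sketched with map and reduce in a single process (Python; the rule, facts, and exception set are invented, and a real deployment would run on an actual MapReduce cluster):

```python
# Map/reduce-style sketch of applying one defeasible rule with an
# exception over partitioned facts, illustrating the kind of
# data-parallel rule evaluation the paper performs with MapReduce.
# A single-process toy; the exception set plays the role of
# broadcast side data shared by all mappers.

from functools import reduce

facts = [("bird", "tweety"), ("bird", "pingu"), ("penguin", "pingu")]
exceptions = {x for p, x in facts if p == "penguin"}

def mapper(partition):
    # birds defeasibly fly unless the penguin exception applies
    return {("flies", x) for p, x in partition
            if p == "bird" and x not in exceptions}

def partition(data, n):
    return [data[i::n] for i in range(n)]

conclusions = reduce(set.union, map(mapper, partition(facts, 2)), set())
print(sorted(conclusions))   # [('flies', 'tweety')]
```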


Lakemeyer

AAAI Conferences

Only-knowing was originally introduced by Levesque to capture the beliefs of an agent in the sense that its knowledge base is all the agent knows. When a knowledge base contains defaults, Levesque also showed an exact correspondence between only-knowing and autoepistemic logic. Later these results were extended by Lakemeyer and Levesque to also capture a variant of autoepistemic logic proposed by Konolige, as well as Reiter's default logic. One of the benefits of such an approach is that various nonmonotonic formalisms can be compared within a single monotonic logic, leading, among other things, to the first axiom system for default logic. In this paper, we will bring another large class of nonmonotonic systems, which were first studied by McDermott and Doyle, into the only-knowing fold. Among other things, we will provide the first possible-world semantics for such systems, providing a new perspective on the nature of modal approaches to nonmonotonic reasoning.


Wilhelm

AAAI Conferences

The principle of maximum entropy (MaxEnt) constitutes a powerful formalism for nonmonotonic reasoning based on probabilistic conditionals. Conditionals are defeasible rules which allow one to express that certain subclasses of some broader concept behave exceptionally. In the (common) probabilistic semantics of conditional statements, these exceptions are formalized only implicitly: the conditional (B|A)[p] expresses that if A holds, then B is typically true, namely with probability p, but without explicitly talking about the subclass of A for which B does not hold. There is no way to express within the conditional that a subclass C of A is excluded from the inference to B because one is unaware of the probability of B given C. In this paper, we apply the concept of default negation to probabilistic MaxEnt reasoning in order to formalize this kind of unawareness and propose a context-based inference formalism. We exemplify the usefulness of this inference relation, and show that it satisfies basic formal properties of probabilistic reasoning.
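The MaxEnt selection for a single conditional (B|A)[p] can be computed on the four worlds over {A, B} (a Python sketch; by symmetry the not-A worlds share mass equally and the constraint fixes the split inside A, so a one-parameter search over t = P(A) suffices — real MaxEnt reasoners solve a general convex program):

```python
# MaxEnt sketch: among all distributions over the four worlds of
# {A, B} satisfying the probabilistic conditional (B|A)[p], find
# the one of maximum entropy. The solution has the form P(A) = t
# with mass p*t on (A,B), (1-p)*t on (A,~B), and (1-t)/2 on each
# ~A world, so a one-dimensional grid search suffices.
# Illustrative only, not a general MaxEnt reasoner.

from math import log

def entropy(probs):
    return -sum(q * log(q) for q in probs if q > 0)

def maxent_conditional(p, steps=100000):
    best_t, best_h = None, -1.0
    for i in range(1, steps):
        t = i / steps                       # t = P(A)
        probs = [p * t, (1 - p) * t, (1 - t) / 2, (1 - t) / 2]
        h = entropy(probs)
        if h > best_h:
            best_t, best_h = t, h
    return best_t

t = maxent_conditional(0.9)   # (B|A)[0.9], e.g. "birds fly with probability 0.9"
print(round(t, 3))            # P(A) under maximum entropy, about 0.409
```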