AAAI Conferences
Tachmazidis
We are witnessing an explosion of available data from the Web, government authorities, scientific databases, sensors and more. Such datasets could benefit from the introduction of rule sets encoding commonly accepted rules or facts, application- or domain-specific rules, commonsense knowledge and so on. This raises the question of whether, how, and to what extent knowledge representation methods are capable of handling the vast amounts of data for these applications. In this paper, we consider nonmonotonic reasoning, which has traditionally focused on rich knowledge structures. In particular, we consider defeasible logic, and analyze how parallelization, using the MapReduce framework, can be used to reason with defeasible rules over huge data sets. Our experimental results demonstrate that defeasible reasoning over billions of facts is performant, and has the potential to scale to trillions of facts.
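The core idea of parallelizing defeasible reasoning with MapReduce can be illustrated with a toy sketch (not the paper's implementation; the rule base, literal encoding, and conflict-resolution policy below are invented for illustration): the map phase fires rules on individual facts, the shuffle groups candidate conclusions by atom, and the reduce phase resolves conflicts, letting strict conclusions override defeasible ones.

```python
from collections import defaultdict

# Hypothetical rule base: (kind, antecedent, consequent), where kind is
# 'strict' or 'defeasible' and a leading "-" marks a negated literal.
RULES = [
    ("strict",     "penguin", "bird"),
    ("defeasible", "bird",    "flies"),
    ("strict",     "penguin", "-flies"),
]

def map_phase(fact):
    """Emit (atom, (rule kind, sign)) for every rule fired by this fact."""
    for kind, ante, cons in RULES:
        if ante == fact:
            yield cons.lstrip("-"), (kind, not cons.startswith("-"))

def reduce_phase(atom, evidence):
    """Resolve conflicting conclusions about one atom: strict beats defeasible."""
    strict_signs = {sign for kind, sign in evidence if kind == "strict"}
    signs = strict_signs or {sign for _, sign in evidence}
    if signs == {True}:
        return atom
    if signs == {False}:
        return "-" + atom
    return None  # unresolved conflict

facts = ["penguin", "bird"]
grouped = defaultdict(list)
for f in facts:
    for atom, ev in map_phase(f):
        grouped[atom].append(ev)          # the "shuffle": group by atom
conclusions = {a: reduce_phase(a, ev) for a, ev in grouped.items()}
```

Here the defeasible conclusion `flies` (birds usually fly) is overridden by the strict conclusion `-flies` derived from `penguin`; in a real MapReduce job the map and reduce functions would run distributed over partitions of the fact base.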
Ma
Belief revision studies how agents revise their belief states when receiving new evidence. Both in classical belief revision and in epistemic revision, a new input is either in the form of a (weighted) propositional formula or a total pre-order (where the total pre-order is considered as a whole). However, in some real-world applications, a new input can be a partial pre-order where each unit that constitutes the partial pre-order is important and should be considered individually. To address this issue, in this paper, we study how a partial pre-order representing the prior epistemic state can be revised by another partial pre-order (the new input) from a different perspective, where the revision is conducted recursively on the individual units of partial pre-orders. We propose different revision operators (rules), dubbed the extension, match, inner and outer revision operators, from different revision points of view. We also analyze several properties of these operators.
Huang
Within the recently proposed Universal Booleanization framework, we consider the Cumulative constraint, for which the original Boolean encoding proves ineffective, and present a new Boolean encoding that causes the SAT solver to largely simulate the search strategy used by some of the best-performing native methods. Apart from providing motivation for future research in a similar direction, we obtain a significantly enhanced version of Universal Booleanization for problems containing Cumulative constraints.
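For readers unfamiliar with it, the Cumulative constraint requires that tasks running concurrently never exceed a shared resource capacity. A minimal feasibility check (not an encoding; the tuple representation is an assumption made for illustration) looks like this:

```python
def cumulative_ok(tasks, capacity):
    """True iff the total demand at every time point stays within capacity.
    Each task is a (start, duration, demand) tuple; the load can only
    change at task start/end times, so checking those suffices."""
    events = sorted({s for s, d, _ in tasks} | {s + d for s, d, _ in tasks})
    for t in events:
        load = sum(dem for s, d, dem in tasks if s <= t < s + d)
        if load > capacity:
            return False
    return True

tasks = [(0, 3, 2), (1, 2, 2), (3, 2, 1)]   # (start, duration, demand)
feasible = cumulative_ok(tasks, capacity=4)  # tasks 1 and 2 overlap on [1, 3)
```

A Boolean encoding of this constraint must express the same "sum of demands of running tasks at each time point" condition over SAT variables, which is why naive encodings blow up and encoding choice matters.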
Giordano
Temporal logics can be used in reasoning about actions for specifying constraints on domain descriptions and temporal properties to be verified. In this paper, we exploit Bounded Model Checking (BMC) techniques in the verification of Dynamic Linear Time Temporal Logic (DLTL) properties of an action theory, which is formulated in a temporal extension of Answer Set Programming (ASP). To achieve completeness, we propose an approach to BMC which exploits the Büchi automaton construction while searching for a counterexample. We provide an encoding in ASP of the temporal action domain and of Bounded Model Checking of DLTL formulas.
Gebser
The advance of Internet and Sensor technology has brought about new challenges evoked by the emergence of continuous data streams. While existing data-stream management systems allow for high-throughput stream processing, they lack complex reasoning capacities. We address this shortcoming and elaborate upon an approach to knowledge-intense stream reasoning based on Answer Set Programming (ASP). The emphasis thus shifts from rapid data processing to complex reasoning. To accommodate this in ASP, we develop new techniques that allow us to formulate problem encodings dealing with emerging as well as expiring data in a seamless way. We thus propose novel language constructs and modeling techniques for specifying and reasoning with time-decaying logic programs.
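The notion of emerging and expiring data can be pictured with a small sliding-window sketch (the class and its interface are invented for illustration, not the paper's ASP constructs): each fact is valid for a fixed number of time steps and silently expires afterwards.

```python
from collections import deque

class DecayingWindow:
    """Keep stream facts alive for `window` time steps, then expire them."""

    def __init__(self, window):
        self.window = window
        self.items = deque()          # (timestamp, fact) pairs, oldest first

    def tick(self, now, fact=None):
        if fact is not None:
            self.items.append((now, fact))
        # Expire facts that have fallen out of the window.
        while self.items and self.items[0][0] <= now - self.window:
            self.items.popleft()
        return [f for _, f in self.items]

w = DecayingWindow(window=2)
w.tick(1, "alarm(a)")
w.tick(2, "alarm(b)")
visible = w.tick(3)                   # the fact from t=1 has expired
```

In a time-decaying logic program the same effect is achieved declaratively, by language constructs that attach a lifespan to incoming atoms rather than by explicit buffer management.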
Feier
The paper introduces a worst-case optimal tableau algorithm for reasoning with Forest Logic Programs (FoLPs), a decidable fragment of Open Answer Set Programming. FoLPs are a useful device for tight integration of the Description Logic and Logic Programming worlds: reasoning with the DL SHOQ can be simulated within the fragment. The algorithm reuses a previously introduced knowledge compilation technique, but improves on previous results by decreasing the worst-case running time by one exponential level. The decrease in complexity is due to the combined use of a new redundancy rule and a new caching rule.
Everaere
Belief merging aims at extracting a coherent and informative view from a set of belief bases. A first requirement for belief merging operators is to obey basic rationality conditions. Another expected property is to preserve as much information as possible from the input bases. In this paper, we show how new merging operators, called compositional operators, can be defined from existing ones. Such operators aim at offering a higher discriminative power than the merging operators on which they are based, without leading to a complexity shift or losing rationality postulates. We identify some sufficient conditions for ensuring that rationality is fully preserved by composition.
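To make "existing merging operators" concrete, here is a classic distance-based base operator, the Hamming sum operator (shown only as an example of the kind of operator composition builds on; the model-set representation is an assumption for illustration):

```python
from itertools import product

def hamming(u, v):
    """Number of propositional variables on which two worlds differ."""
    return sum(x != y for x, y in zip(u, v))

def merge_sum(bases, n_vars):
    """Return the worlds minimizing the summed Hamming distance to the
    bases, each base being given as its set of models."""
    worlds = list(product((0, 1), repeat=n_vars))
    def cost(w):
        return sum(min(hamming(w, m) for m in base) for base in bases)
    best = min(cost(w) for w in worlds)
    return {w for w in worlds if cost(w) == best}

# Three bases over two variables: two agents believe (1, 1), one believes (0, 0).
merged = merge_sum([{(1, 1)}, {(1, 1)}, {(0, 0)}], n_vars=2)
```

An operator like this satisfies the standard rationality postulates but can leave many worlds tied; the compositional operators of the paper aim precisely at breaking such ties, i.e., at a higher discriminative power, without sacrificing the postulates.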
Craven
We describe the application of assumption-based argumentation (ABA) to a domain of medical knowledge derived from clinical trials of drugs for breast cancer. We adapt an algorithm for calculating the admissible semantics for ABA frameworks to take account of preferences and describe a prototype implementation which uses variant-based parallel computation to improve the efficiency of query answering.
Coste-Marquis
Recently, (Dunne et al. 2009; 2011) have suggested weighting attacks within Dung's abstract argumentation frameworks, and introduced the concept of a WAF (Weighted Argumentation Framework). However, they use WAFs in a very specific way, namely for relaxing attacks. The aim of this paper is to explore ways to take advantage of attack weights within an argumentation process. Two different approaches are considered: the first one extends the proposal by (Dunne et al. 2011) and considers aggregation functions other than sum for relaxing attacks. The second one shows how weights can be exploited to strengthen the usual notion of defence, leading to new concepts of extensions.
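The idea of a weight-strengthened defence can be sketched on top of the standard grounded semantics. The specific condition below, requiring that each attacker be counter-attacked with at least equal weight, is a toy variant invented for illustration, not the paper's definition:

```python
def defended(arg, inside, attacks, weights):
    """Weight-aware defence: every attacker of `arg` must be attacked by
    some member of `inside` with at least the attacker's own weight."""
    for b in [a for a, t in attacks if t == arg]:          # attackers of arg
        if not any((c, b) in attacks and weights[(c, b)] >= weights[(b, arg)]
                   for c in inside):
            return False
    return True

def grounded(args, attacks, weights):
    """Least fixpoint of the (weight-aware) characteristic function."""
    inside, changed = set(), True
    while changed:
        changed = False
        for a in args:
            if a not in inside and defended(a, inside, attacks, weights):
                inside.add(a)
                changed = True
    return inside

# a attacks b strongly; b attacks c weakly; a defends c.
ext = grounded({"a", "b", "c"},
               {("a", "b"), ("b", "c")},
               {("a", "b"): 3, ("b", "c"): 2})
```

Because the strengthened defence is monotone in `inside`, the fixpoint iteration still converges; with weights `{("a", "b"): 1, ("b", "c"): 2}` the counter-attack would be too weak and `c` would no longer be defended.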
Cohn
Successful analysis of video data requires an integration of techniques from KR, Computer Vision, and Machine Learning. Being able to detect and to track objects as well as extracting their changing spatial relations with other objects is one approach to describing and detecting events. Different kinds of spatial relations are important, including topology, direction, size, and distance between objects as well as changes of those relations over time. Typically these kinds of relations are treated separately, which makes it difficult to integrate all the extracted spatial information. We present a uniform and comprehensive spatial representation of moving objects that includes all the above spatial/temporal aspects, analyse different properties of this representation and demonstrate that it is suitable for video analysis.
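Extracting a changing topological relation between two tracked objects can be pictured with a bounding-box sketch (the relation names loosely follow RCC-style distinctions; the representation and thresholds are illustrative, not the paper's):

```python
def relation(a, b):
    """Coarse topological relation between two axis-aligned boxes,
    each given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    if ax2 < bx1 or bx2 < ax1 or ay2 < by1 or by2 < ay1:
        return "disjoint"
    if ax1 <= bx1 and ay1 <= by1 and bx2 <= ax2 and by2 <= ay2:
        return "contains"
    return "overlaps"

# Two frames of a hypothetical track: object B moves inside object A.
frames = [((0, 0, 10, 10), (12, 12, 15, 15)),
          ((0, 0, 10, 10), (2, 2, 5, 5))]
history = [relation(a, b) for a, b in frames]
```

A transition in `history` (here from "disjoint" to "contains") is the kind of qualitative spatial change that, combined with direction, size, and distance relations, can serve as an event signature for video analysis.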