
Managing Change in Graph-Structured Data Using Description Logics

AAAI Conferences

In this paper we consider the setting of graph-structured data that evolves as a result of operations carried out by users or applications. We study different reasoning problems, which range from ensuring the satisfaction of a given set of integrity constraints after a given sequence of updates, to deciding the (non-)existence of a sequence of actions that would take the data to an (un)desirable state, starting either from a specific data instance or from an incomplete description of it. We consider a simple action language in which actions are finite sequences of insertions and deletions of nodes and labels, and use Description Logics for describing integrity constraints and (partial) states of the data. We then formalize the data management problems mentioned above as a static verification problem and several planning problems. We provide algorithms and tight complexity bounds for the formalized problems, both for an expressive DL and for a variant of DL-Lite.
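As a rough illustration of the setting (and not of the paper's Description Logic formalism), the following Python sketch models a labelled data instance, the simple action language of node and label insertions and deletions, and a toy integrity constraint. All node and label names are hypothetical.

class GraphInstance:
    def __init__(self):
        self.nodes = set()
        self.labels = {}          # node -> set of labels

    def insert_node(self, n):
        self.nodes.add(n)
        self.labels.setdefault(n, set())

    def delete_node(self, n):
        self.nodes.discard(n)
        self.labels.pop(n, None)

    def insert_label(self, n, label):
        self.labels.setdefault(n, set()).add(label)

    def delete_label(self, n, label):
        self.labels.get(n, set()).discard(label)

def apply_sequence(graph, actions):
    # An action is a tuple such as ('insert_node', 'n1') or ('insert_label', 'n1', 'Employee').
    for action in actions:
        getattr(graph, action[0])(*action[1:])
    return graph

# Toy integrity constraint: every node labelled 'Employee' must also be labelled 'Person'.
def satisfies_constraint(graph):
    return all('Person' in ls for ls in graph.labels.values() if 'Employee' in ls)

g = apply_sequence(GraphInstance(), [('insert_node', 'n1'),
                                     ('insert_label', 'n1', 'Employee'),
                                     ('insert_label', 'n1', 'Person')])
print(satisfies_constraint(g))   # True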


ARIA: Asymmetry Resistant Instance Alignment

AAAI Conferences

We study the problem of instance alignment between knowledge bases (KBs). Existing approaches, exploiting the “symmetry” of structure and information across KBs, suffer in the presence of asymmetry, which is frequent as KBs are independently built. Specifically, we observe three types of asymmetries (in concepts, in features, and in structures). Our goal is to identify key techniques to reduce accuracy loss caused by each type of asymmetry, then design Asymmetry-Resistant Instance Alignment framework (ARIA). ARIA uses two-phased blocking methods considering concept and feature asymmetries, with a novel similarity measure overcoming structure asymmetry. Compared to a state-of-the-art method, ARIA increased precision by 19% and recall by 2%, and decreased processing time by more than 80% in matching large-scale real-life KBs.
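The following Python sketch shows a generic blocking-plus-similarity baseline for instance alignment between two KBs. It only illustrates the kind of pipeline discussed above; it does not reproduce ARIA's two-phased blocking or its asymmetry-resistant similarity measure, and the instances and blocking key are hypothetical toy data.

def blocks(instances, key):
    # Group instances by a cheap blocking key to avoid all-pairs comparison.
    grouped = {}
    for inst in instances:
        grouped.setdefault(key(inst), []).append(inst)
    return grouped

def jaccard(a, b):
    # Token-overlap similarity between two instances.
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def align(kb1, kb2, key, threshold=0.5):
    b2 = blocks(kb2, key)
    matches = []
    for inst1 in kb1:
        for inst2 in b2.get(key(inst1), []):
            score = jaccard(inst1['tokens'], inst2['tokens'])
            if score >= threshold:
                matches.append((inst1['id'], inst2['id'], score))
    return matches

kb1 = [{'id': 'x1', 'tokens': ['barack', 'obama', 'politician']}]
kb2 = [{'id': 'y7', 'tokens': ['barack', 'obama', 'president']}]
print(align(kb1, kb2, key=lambda i: i['tokens'][0]))   # [('x1', 'y7', 0.5)]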


A Constructive Argumentation Framework

AAAI Conferences

Dung's argumentation framework is an abstract framework based on a set of arguments and a binary attack relation defined over the set. One instantiation, among many others, of Dung's framework consists in constructing the arguments from a set of propositional logic formulas. An argument is thus seen as a reason for or against the truth of a particular statement. Despite its advantages, the argumentation approach to inconsistency handling also has important shortcomings. More precisely, in some applications one is interested not only in the conclusions supported by the arguments but also in precise explanations of those conclusions. We show that an argumentation framework instantiated with classical logic formulas is not suitable for this problem. Intuitionistic logic, on the other hand, appears to be a natural alternative to classical logic for instantiating Dung's framework. We develop a constructive argumentation framework and show that intuitionistic logic offers desirable properties of the arguments. We also provide a characterization of the arguments in this setting in terms of minimal inconsistent subsets when intuitionistic logic is embedded in the modal logic S4.
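For readers unfamiliar with the abstract framework, the following Python sketch enumerates the admissible sets of a small argumentation framework by brute force. It illustrates only the abstract setting (arguments plus an attack relation), not the paper's instantiation with intuitionistic or classical logic formulas; the example arguments are toy data.

from itertools import combinations

def conflict_free(S, attacks):
    return not any((a, b) in attacks for a in S for b in S)

def defends(S, a, attacks, args):
    # Every attacker of a must be counter-attacked by some member of S.
    return all(any((c, b) in attacks for c in S)
               for b in args if (b, a) in attacks)

def admissible_sets(args, attacks):
    for k in range(len(args) + 1):
        for S in combinations(args, k):
            if conflict_free(S, attacks) and all(defends(S, a, attacks, args) for a in S):
                yield set(S)

# Example: a attacks b, b attacks c.
args = {'a', 'b', 'c'}
attacks = {('a', 'b'), ('b', 'c')}
print(list(admissible_sets(args, attacks)))   # [set(), {'a'}, {'a', 'c'}]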


Large-Scale Analogical Reasoning

AAAI Conferences

Cognitive simulation of analogical processing can be used to answer comparison questions such as: What are the similarities and/or differences between A and B, for concepts A and B in a knowledge base (KB)? Previous attempts to use a general-purpose analogical reasoner to answer such questions revealed three major problems: (a) the system presented too much information in the answer, and the salient similarity or difference was not highlighted; (b) analogical inference found some incorrect differences; and (c) some expected similarities were not found. The cause of these problems was primarily the lack of a well-curated KB and, secondarily, algorithmic deficiencies. In this paper, relying on a well-curated biology KB, we present a specific implementation for answering comparison questions, inspired by a general model of analogical reasoning. We present numerous examples of answers produced by the system and empirical data on answer quality to illustrate that we have addressed many of the problems of the previous system.


Exponential Deepening A* for Real-Time Agent-Centered Search

AAAI Conferences

In the Real-Time Agent-Centered Search (RTACS) problem, an agent has to arrive at a goal location while acting and reasoning in the physical world. Traditionally, RTACS problems are solved by propagating and updating heuristic values of states visited by the agent. In existing RTACS algorithms the agent may revisit each state many times, causing the entire procedure to be quadratic in the state space. We study the Iterative Deepening (ID) approach for solving RTACS and introduce Exponential Deepening A* (EDA*), an RTACS algorithm where the threshold between successive Depth-First calls is increased exponentially. EDA* is proven to hold a worst-case bound that is linear in the state space. Experimental results supporting this bound are presented and demonstrate up to a 10x reduction over existing RTACS solvers with respect to distance traveled, states expanded, and CPU runtime.
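The following Python sketch illustrates the exponential-deepening idea in an offline setting: cost-bounded depth-first searches whose bound is multiplied by a constant factor between iterations (unlike IDA*, which raises the bound to the smallest exceeded f-value). It is not the agent-centered EDA* algorithm itself; the doubling factor, the visited set, and the toy graph are simplifications introduced here, and the returned path is not guaranteed optimal.

def bounded_dfs(node, goal, g, bound, successors, heuristic, visited):
    # Depth-first search that abandons any branch whose f = g + h exceeds the bound.
    f = g + heuristic(node)
    if f > bound:
        return None
    if node == goal:
        return [node]
    visited.add(node)
    for nxt, cost in successors(node):
        if nxt in visited:
            continue
        path = bounded_dfs(nxt, goal, g + cost, bound, successors, heuristic, visited)
        if path is not None:
            return [node] + path
    visited.discard(node)
    return None

def exponential_deepening(start, goal, successors, heuristic, factor=2.0):
    # Assumes the goal is reachable from the start state.
    bound = max(1.0, heuristic(start))
    while True:
        path = bounded_dfs(start, goal, 0.0, bound, successors, heuristic, set())
        if path is not None:
            return path
        bound *= factor   # exponential (not minimal) increase of the threshold

# Toy path graph a-b-c-d with unit costs and a zero heuristic.
graph = {'a': [('b', 1)], 'b': [('c', 1)], 'c': [('d', 1)], 'd': []}
print(exponential_deepening('a', 'd', lambda s: graph[s], lambda s: 0))   # ['a', 'b', 'c', 'd']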


The Computational Complexity of Structure-Based Causality

AAAI Conferences

Halpern and Pearl introduced a definition of actual causality; Eiter and Lukasiewicz showed that computing whether X = x is a cause of Y = y is NP-complete in binary models (where all variables can take on only two values) and \Sigma_2^P-complete in general models. In the final version of their paper, Halpern and Pearl slightly modified the definition of actual cause in order to deal with problems pointed out by Hopkins and Pearl. As we show, this modification has a nontrivial impact on the complexity of computing actual cause. To characterize the complexity, a new family D_k^P, k = 1, 2, 3, ..., of complexity classes is introduced, which generalizes the class D^P introduced by Papadimitriou and Yannakakis (D^P is just D_1^P). We show that the complexity of computing causality under the updated definition is D_2^P-complete. Chockler and Halpern extended the definition of causality by introducing notions of responsibility and blame. The complexity of determining the degree of responsibility and blame using the original definition of causality was completely characterized. Again, we show that changing the definition of causality affects the complexity, and we completely characterize it using the updated definition.
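For reference, the following is a hedged reconstruction of the complexity classes mentioned above, assuming the natural generalization of Papadimitriou and Yannakakis' D^P obtained by replacing NP and coNP with the k-th levels of the polynomial hierarchy (consistent with the statement that D^P is just D_1^P):

\[
  D^P = \{\, L_1 \cap L_2 : L_1 \in \mathrm{NP},\ L_2 \in \mathrm{coNP} \,\}, \qquad
  D_k^P = \{\, L_1 \cap L_2 : L_1 \in \Sigma_k^P,\ L_2 \in \Pi_k^P \,\}.
\]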


Increasing VCG Revenue by Decreasing the Quality of Items

AAAI Conferences

The VCG mechanism is the standard method to incentivize bidders in combinatorial auctions to bid truthfully. Under the VCG mechanism, the auctioneer can sometimes increase revenue by “burning” items. We study this phenomenon in a setting where items are described by a number of attributes. The value of an attribute corresponds to a quality level, and bidders’ valuations are non-decreasing in the quality levels. In addition to burning items, we allow the auctioneer to present some of the attributes as lower quality than they actually are. We consider the following two revenue maximization problems under VCG: finding an optimal way to mark down items by reducing their quality levels, and finding an optimal set of items to burn. We study the effect of the following parameters on the computational complexity of these two problems: the number of attributes, the number of quality levels per attribute, and the complexity of the bidders’ valuation functions. Bidders have unit demand, so VCG’s outcome can be computed in polynomial time, and the valuation functions we consider are step functions that are non-decreasing with the quality levels. We prove that both problems are NP-hard even in the following three simple settings: a) four attributes, arbitrarily many quality levels per attribute, and single-step valuation functions, b) arbitrarily many attributes, two quality levels per attribute, and single-step valuation functions, and c) one attribute, arbitrarily many quality levels, and multi-step valuation functions. For the case where items have only one attribute, and every bidder has a single-step valuation (zero below some quality threshold), we show that both problems can be solved in polynomial time using a dynamic-programming approach. For this case, we also quantify how much better marking down is than item burning, and we compare the revenue of both approaches with computational experiments.
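The item-burning phenomenon itself is easy to reproduce on toy instances. The Python sketch below brute-forces the efficient unit-demand allocation and the Clarke (VCG) payments, and shows a classic two-bidder, two-item example where removing an item raises revenue; the bidders, items, and values are hypothetical, and this is not the paper's dynamic-programming algorithm or its attribute model.

from itertools import permutations

def efficient_assignment(values, bidders, items):
    # Brute-force maximum-welfare matching of unit-demand bidders to items.
    pool = list(items) + [None] * len(bidders)   # None = bidder receives nothing
    best_val, best_assign = 0, {b: None for b in bidders}
    for assignment in permutations(pool, len(bidders)):
        total = sum(values[b].get(it, 0) for b, it in zip(bidders, assignment) if it is not None)
        if total > best_val:
            best_val, best_assign = total, dict(zip(bidders, assignment))
    return best_val, best_assign

def vcg_revenue(values, bidders, items):
    # Each winner pays the externality they impose on the other bidders (Clarke payment).
    welfare, assign = efficient_assignment(values, bidders, items)
    revenue = 0
    for b in bidders:
        others = [x for x in bidders if x != b]
        welfare_without_b, _ = efficient_assignment(values, others, items)
        others_welfare_with_b = welfare - values[b].get(assign[b], 0)
        revenue += welfare_without_b - others_welfare_with_b
    return revenue

# Two bidders, each valuing either item at 10: with both items there is no
# competition and revenue is 0; burning one item creates competition.
values = {'bidder1': {'item1': 10, 'item2': 10}, 'bidder2': {'item1': 10, 'item2': 10}}
print(vcg_revenue(values, ['bidder1', 'bidder2'], ['item1', 'item2']))   # 0
print(vcg_revenue(values, ['bidder1', 'bidder2'], ['item1']))            # 10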


Incomplete Preferences in Single-Peaked Electorates

AAAI Conferences

Incomplete preferences are likely to arise in real-world preference aggregation and voting systems. This paper deals with determining whether an incomplete preference profile is single-peaked. This is essential information since many intractable voting problems become tractable for single-peaked profiles. We prove that for incomplete profiles the problem of determining single-peakedness is NP-complete. Despite this computational hardness result, we find four polynomial-time algorithms for reasonably restricted settings.
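For contrast with the hardness result, the following Python sketch handles the easy special case of a complete ranking and a known axis: it checks the standard single-peakedness condition that, on either side of the voter's peak, candidates closer to the peak must be preferred to candidates farther away. The candidate names and the axis are toy data, and this is not one of the paper's algorithms for incomplete profiles.

def single_peaked(ranking, axis):
    # ranking: candidates from most to least preferred; axis: left-to-right order.
    pos = {c: i for i, c in enumerate(axis)}
    rank = {c: i for i, c in enumerate(ranking)}
    peak = pos[ranking[0]]
    for c in axis:
        for d in axis:
            # If c lies between the peak and d on the axis, c must be preferred to d.
            if (peak <= pos[c] < pos[d]) or (pos[d] < pos[c] <= peak):
                if rank[c] > rank[d]:
                    return False
    return True

print(single_peaked(['b', 'c', 'a'], ['a', 'b', 'c']))   # True: preferences fall off on both sides of the peak b
print(single_peaked(['a', 'c', 'b'], ['a', 'b', 'c']))   # False: moving right from the peak a, b should beat c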


Lazy Defenders Are Almost Optimal against Diligent Attackers

AAAI Conferences

Most work building on the Stackelberg security games model assumes that the attacker can perfectly observe the defender's randomized assignment of resources to targets. This assumption has been challenged by recent papers, which designed tailor-made algorithms that compute optimal defender strategies for security games with limited surveillance. We analytically demonstrate that in zero-sum security games, lazy defenders, who simply keep optimizing against perfectly informed attackers, are almost optimal against diligent attackers, who go to the effort of gathering a reasonable number of observations. This result implies that, in some realistic situations, limited surveillance may not need to be explicitly addressed.
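In the zero-sum case, the "lazy" defender strategy described above is simply the maximin mixed strategy, which can be computed by linear programming. The following Python sketch (using scipy.optimize.linprog on a hypothetical payoff matrix) computes it for a small game; it illustrates the lazy baseline only, not the limited-surveillance algorithms being compared against.

import numpy as np
from scipy.optimize import linprog

def maximin_strategy(defender_payoffs):
    # defender_payoffs[i, j]: defender's utility when the defender plays pure
    # strategy i and the attacker attacks target j (zero-sum game).
    n, m = defender_payoffs.shape
    # Variables: x_1..x_n (mixed strategy) and v (game value); minimize -v.
    c = np.zeros(n + 1)
    c[-1] = -1.0
    # For every attacker choice j:  v - sum_i x_i * U[i, j] <= 0
    A_ub = np.hstack([-defender_payoffs.T, np.ones((m, 1))])
    b_ub = np.zeros(m)
    A_eq = np.concatenate([np.ones(n), [0.0]]).reshape(1, -1)   # probabilities sum to 1
    b_eq = [1.0]
    bounds = [(0, 1)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]

# Two coverage strategies, two targets (hypothetical utilities).
U = np.array([[0.0, -1.0],
              [-1.0, 0.0]])
strategy, value = maximin_strategy(U)
print(strategy, value)   # roughly [0.5, 0.5] and -0.5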


Parallel Materialisation of Datalog Programs in Centralised, Main-Memory RDF Systems

AAAI Conferences

We present a novel approach to parallel materialisation (i.e., fixpoint computation) of datalog programs in centralised, main-memory, multi-core RDF systems. Our approach comprises an algorithm that evenly distributes the workload to cores, and an RDF indexing data structure that supports efficient, 'mostly' lock-free parallel updates. Our empirical evaluation shows that our approach parallelises computation very well: with 16 physical cores, materialisation can be up to 13.9 times faster than with just one core.
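As background, the following Python sketch shows sequential semi-naive materialisation of a single (left-linear) transitive-closure rule over triples: each round joins only the newly derived facts (the delta) against the base relation until a fixpoint is reached. The paper's contribution, parallelising this computation across cores with a mostly lock-free RDF index, is not reproduced here, and the property name is a toy example.

def transitive_closure(triples, prop):
    # Rule (left-linear form): T(x, prop, z) :- T(x, prop, y), (y, prop, z).
    base = {(s, o) for (s, p, o) in triples if p == prop}
    closure = set(base)
    delta = set(base)
    while delta:
        # Join only the facts derived in the previous round against the base relation.
        new = {(x, z) for (x, y) in delta for (y2, z) in base if y2 == y} - closure
        closure |= new
        delta = new
    return {(s, prop, o) for (s, o) in closure}

edges = {('a', 'partOf', 'b'), ('b', 'partOf', 'c'), ('c', 'partOf', 'd')}
print(sorted(transitive_closure(edges, 'partOf')))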