
AAAI Conferences

We introduce the framework of qualitative optimization problems (or, simply, optimization problems) to represent preference theories. The formalism uses separate modules to describe the space of outcomes to be compared (the generator) and the preferences on outcomes (the selector). We consider two types of optimization problems. They differ in the way the generator, which we model by a propositional theory, is interpreted: by the standard propositional logic semantics, and by the equilibrium-model (answer-set) semantics. Under the latter interpretation of generators, optimization problems directly generalize answer-set optimization programs proposed previously. We study strong equivalence of optimization problems, which guarantees their interchangeability within any larger context. We characterize several versions of strong equivalence obtained by restricting the class of optimization problems that can be used as extensions and establish the complexity of associated reasoning tasks. Understanding strong equivalence is essential for modular representation of optimization problems and rewriting techniques to simplify them without changing their inherent properties.
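
To make the generator/selector split concrete, here is a minimal sketch in Python, assuming a toy propositional generator interpreted under classical semantics and a hand-picked ranking as the selector; the atom names, the encoding, and the ranking are illustrative inventions, not the paper's formalism.

```python
from itertools import product

# Hypothetical toy encoding: the generator is a propositional theory in CNF
# (a clause is a set of literals; "-" marks negation), interpreted here under
# classical semantics. The selector ranks outcomes; lower rank = preferred.

ATOMS = ["beach", "mountains", "cheap"]

GENERATOR = [                       # (beach or mountains) and not (beach and mountains)
    {"beach", "mountains"},
    {"-beach", "-mountains"},
]

def satisfies(model, clause):
    """A clause is satisfied if some literal is true in the model."""
    return any((lit.lstrip("-") in model) != lit.startswith("-") for lit in clause)

def outcomes(theory, atoms):
    """All classical models of the generator: the space of outcomes to compare."""
    for bits in product([False, True], repeat=len(atoms)):
        model = {a for a, b in zip(atoms, bits) if b}
        if all(satisfies(model, c) for c in theory):
            yield model

def selector(model):
    """Toy preference: prefer cheap trips, then prefer the beach."""
    return (0 if "cheap" in model else 1, 0 if "beach" in model else 1)

print("optimal outcome:", min(outcomes(GENERATOR, ATOMS), key=selector))
```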


Summary Report of the Second International Competition on Computational Models of Argumentation

AI Magazine

One of NIST's research areas has been the quantification of a robotic system's (robot, controller, and sensors) ability to respond to a dynamic environment. Dynamic response includes handling errors like dropped parts or responding to changes in orders. The goal of ARIAC is to solidify the field of robot agility while also progressing the state of the art; the competition addresses this aspect of robot agility, and each team's system is faced with challenges such as those shown in figure 1. The organizers chose kitting because of its similarity to assembly. Teams were tasked with assembling a kit both from bins of stationary parts and from a […]. After the robotic system finished the kit, the kit was placed on an autonomous guided vehicle (AGV) and taken away. Teams were faced with such challenges as forced dropped parts and in-process order changes.


Algorithms and Conditional Lower Bounds for Planning Problems

AAAI Conferences

We consider planning problems for graphs, Markov decision processes (MDPs), and games on graphs. While graphs represent the most basic planning model, MDPs represent interaction with nature and games on graphs represent interaction with an adversarial environment. We consider two planning problems where there are k different target sets: (a) the coverage problem asks whether there is a plan for each individual target set, and (b) the sequential target reachability problem asks whether the targets can be reached in sequence. For the coverage problem, we present a linear-time algorithm for graphs and a quadratic conditional lower bound for MDPs and games on graphs. For the sequential target problem, we present a linear-time algorithm for graphs, a sub-quadratic algorithm for MDPs, and a quadratic conditional lower bound for games on graphs. Our results with conditional lower bounds establish (i) model-separation results, showing that for the coverage problem MDPs and games on graphs are harder than graphs, and for the sequential reachability problem games on graphs are harder than MDPs and graphs; and (ii) objective-separation results, showing that for MDPs the coverage problem is harder than the sequential target problem.
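
The two problem statements can be illustrated on plain graphs with a simple BFS-based sketch; this is only a toy illustration of the definitions (graph, names, and encoding are made up) and does not reproduce the paper's linear-time or sub-quadratic algorithms or the MDP/game settings.

```python
from collections import deque

# Toy directed graph as an adjacency list; purely illustrative.
GRAPH = {0: [1, 2], 1: [3], 2: [3, 4], 3: [], 4: [5], 5: []}

def reachable(graph, sources):
    """Vertices reachable from any vertex in `sources` (plain BFS)."""
    seen, queue = set(sources), deque(sources)
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def coverage(graph, start, target_sets):
    """Coverage: is there a plan (path) for each individual target set?"""
    reach = reachable(graph, [start])
    return all(reach & set(t) for t in target_sets)

def sequential(graph, start, target_sets):
    """Sequential reachability: can the target sets be visited in order?"""
    frontier = {start}
    for t in target_sets:
        frontier = reachable(graph, frontier) & set(t)
        if not frontier:
            return False
    return True

print(coverage(GRAPH, 0, [{3}, {5}]))    # True: both 3 and 5 are reachable from 0
print(sequential(GRAPH, 0, [{3}, {5}]))  # False: 5 cannot be reached after 3
```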


Evaluating the Performance of Presumed Payoff Perfect Information Monte Carlo Sampling Against Optimal Strategies

AAAI Conferences

Perfect information Monte Carlo (PIMC) search of games of imperfect information has been around for many years. The approach is appealing for a number of reasons: it allows the usage of well-known methods from perfect information games, its complexity is magnitudes lower than the problem of weakly solving a game in the sense of game theory, and it can be used in a just-in-time manner (no precalculation phase needed) even for games with […]. A very recent algorithm shows how both theoretical problems can be fixed (Lisý, Lanctot, and Bowling 2015), but has yet to be applied to large games typically used for search. More recently, overestimation of MAX's knowledge is also dealt with in the field of general game play (Schofield, Cerexhe, and Thielscher 2013). To the best of our knowledge, all literature on the deficiencies of PIMC concentrates on the overestimation of MAX's knowledge.
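
For readers unfamiliar with PIMC: it determinizes the hidden information, solves each determinized game with ordinary perfect-information search, and aggregates the results per move. The sketch below is a generic, hypothetical illustration of that loop, not the Presumed Payoff variant evaluated in the paper; the game, payoffs, and function names are invented.

```python
import random

# A hypothetical toy game: MAX picks "a" or "b"; the payoff depends on a hidden
# world that MAX cannot observe. PIMC samples worlds, solves each sampled
# perfect-information game exactly, and averages the values per move.

WORLDS = ["w1", "w2", "w3"]              # hidden states, assumed equally likely
PAYOFF = {                               # PAYOFF[world][max_move]
    "w1": {"a": 1.0, "b": 0.0},
    "w2": {"a": 0.0, "b": 1.0},
    "w3": {"a": 1.0, "b": 0.5},
}

def solve_perfect_information(world, move):
    """Stand-in for a full minimax search of the determinized game."""
    return PAYOFF[world][move]

def pimc_move(num_samples=100, rng=random.Random(0)):
    totals = {"a": 0.0, "b": 0.0}
    for _ in range(num_samples):
        world = rng.choice(WORLDS)       # determinize the hidden information
        for move in totals:
            totals[move] += solve_perfect_information(world, move)
    return max(totals, key=totals.get)   # move with the best average value

print("PIMC recommends:", pimc_move())
```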


An Extension-Based Approach to Belief Revision in Abstract Argumentation

AAAI Conferences

Argumentation is an inherently dynamic process. Consequently, recent years have witnessed tremendous research efforts towards an understanding of how the seminal AGM theory of belief change can be applied to argumentation, in particular for Dung's abstract argumentation frameworks (AFs). However, none of the attempts has yet succeeded in handling the natural situation where the revision of an AF is guaranteed to be representable by an AF as well. Given that argumentation can be viewed as a process as well as a product, recent years have seen an increasing number of studies on different problems in the dynamics of argumentation frameworks [Baumann, 2012; Bisquert et al., 2011; 2013; Boella et al., 2009; Booth et al., 2013; Cayrol et al., 2010; Doutre et al., 2014; Kontarinis et al., 2013; Krümpelmann et al., 2012; Nouioua and Würbel, 2014; Sakama, 2014]. The problem we tackle here is how to revise an AF when some new […]
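
As background on the objects being revised: a Dung AF is a directed attack graph whose semantics is given by extensions, i.e., sets of jointly acceptable arguments. The brute-force sketch below computes stable extensions of a toy AF purely for illustration; it is not the revision operator proposed in the paper, and the AF is invented.

```python
from itertools import combinations

# A tiny Dung abstract argumentation framework: arguments and an attack relation.
ARGS = {"a", "b", "c"}
ATTACKS = {("a", "b"), ("b", "a"), ("b", "c")}   # a and b attack each other; b attacks c

def conflict_free(s):
    return not any((x, y) in ATTACKS for x in s for y in s)

def stable(s):
    """Stable extension: conflict-free and attacks every argument outside it."""
    outside = ARGS - s
    return conflict_free(s) and all(
        any((x, y) in ATTACKS for x in s) for y in outside
    )

extensions = [
    set(s)
    for r in range(len(ARGS) + 1)
    for s in combinations(sorted(ARGS), r)
    if stable(set(s))
]
print("stable extensions:", extensions)   # [{'b'}, {'a', 'c'}]
```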


LARS: A Logic-Based Framework for Analyzing Reasoning over Streams

AAAI Conferences

The recent rise of smart applications has drawn interest to logical reasoning over data streams. Different query languages and stream processing/reasoning engines have been proposed. However, due to a lack of theoretical foundations, the expressivity and semantics of these diverse approaches have only been discussed informally. Towards clear specifications and means for analytic study, a formal framework is needed to characterize their semantics in precise terms. We present LARS, a Logic-based framework for Analyzing Reasoning over Streams, i.e., a rule-based formalism with a novel window operator providing a flexible mechanism to represent views on streaming data. We establish complexity results for central reasoning tasks and show how the prominent Continuous Query Language (CQL) can be captured. Moreover, we discuss the relation between LARS and ETALIS, a system for complex event processing. We thus demonstrate the capability of LARS to serve as the desired formal foundation for expressing and analyzing different semantic approaches to stream processing/reasoning and engines.
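
To convey the intuition behind window operators (without using LARS's actual syntax or semantics), a hypothetical sketch: a time-based window restricts a timestamped stream to the recent past, and a toy rule fires if an atom occurs somewhere in that window. Stream contents and names are invented.

```python
# Hypothetical illustration of a time-based window over a stream of timestamped
# atoms, roughly in the spirit of window operators; this is not LARS syntax.

STREAM = [  # (timestamp, atom)
    (1, "tram(a)"),
    (3, "bus(b)"),
    (7, "tram(c)"),
    (9, "jam(x)"),
]

def window(stream, now, size):
    """Time-based window: keep atoms with timestamps in (now - size, now]."""
    return [atom for t, atom in stream if now - size < t <= now]

def holds_somewhere(atom_prefix, stream, now, size):
    """Rough analogue of 'the atom appears at some time point in the window'."""
    return any(a.startswith(atom_prefix) for a in window(stream, now, size))

# Toy rule: expect_delay holds if a jam appears somewhere in the last 5 time units.
now = 10
if holds_somewhere("jam", STREAM, now, size=5):
    print("expect_delay holds at t =", now)
```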


Variable-Deletion Backdoors to Planning

AAAI Conferences

Backdoors are a powerful tool for obtaining efficient algorithms for hard problems. Recently, two new notions of backdoors to planning were introduced. However, for one of the new notions (variable-deletion backdoors) only hardness results are known so far. In this work we improve the situation by defining a new type of variable-deletion backdoor based on the extended causal graph of a planning instance. For this notion of backdoors, several fixed-parameter tractable algorithms are identified. Furthermore, we explore the capabilities of polynomial-time preprocessing, i.e., we check whether there exists a polynomial kernel. Our results also show the close connection between planning and verification problems, such as those for Vector Addition Systems with States (VASS).
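
To convey the flavour of variable-deletion backdoors, a brute-force toy sketch: search for a smallest set of variables whose deletion makes the causal graph acyclic, with acyclicity standing in for whatever tractable target class is fixed. The graph, the target class, and all names are illustrative assumptions; the paper's notion is based on the extended causal graph and comes with fixed-parameter tractable algorithms rather than brute force.

```python
from itertools import combinations

# Toy causal graph of a planning instance: nodes are state variables, and an
# edge u -> v means an action involving u affects v. Purely illustrative.
CAUSAL_GRAPH = {"u": ["v"], "v": ["w"], "w": ["u"], "x": ["w"]}

def is_acyclic(graph):
    """DFS-based cycle check on a directed graph."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}
    def dfs(v):
        color[v] = GREY
        for nxt in graph.get(v, []):
            if nxt not in color:
                continue
            if color[nxt] == GREY or (color[nxt] == WHITE and not dfs(nxt)):
                return False
        color[v] = BLACK
        return True
    return all(dfs(v) for v in graph if color[v] == WHITE)

def delete(graph, removed):
    """Remove a set of variables (and incident edges) from the graph."""
    return {v: [w for w in ws if w not in removed]
            for v, ws in graph.items() if v not in removed}

def smallest_deletion_backdoor(graph, max_size=3):
    """Brute force: smallest variable set whose deletion hits the target class."""
    for k in range(max_size + 1):
        for cand in combinations(sorted(graph), k):
            if is_acyclic(delete(graph, set(cand))):
                return set(cand)
    return None

print("backdoor:", smallest_deletion_backdoor(CAUSAL_GRAPH))   # {'u'}
```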


Efficient Extraction of QBF (Counter)models from Long-Distance Resolution Proofs

AAAI Conferences

Many computer science problems can be naturally and compactly expressed using quantified Boolean formulas (QBFs). Evaluating the truth or falsity of a QBF is an important task, and constructing the corresponding model or countermodel can be just as important and sometimes even more useful in practice. Modern search- and learning-based QBF solvers rely fundamentally on resolution and can be instrumented to produce resolution proofs, from which in turn Skolem-function models and Herbrand-function countermodels can be extracted. These (counter)models are the key enabler of various applications. Only recently was the superiority of long-distance resolution (LQ-resolution) over short-distance resolution (Q-resolution) demonstrated. While a polynomial algorithm exists for (counter)model extraction from Q-resolution proofs, it remained open whether one exists for LQ-resolution proofs. This paper settles this open problem affirmatively by constructing a linear-time extraction procedure. Experimental results show the distinct benefits of the proposed method in extracting high-quality certificates from some LQ-resolution proofs that are not obtainable from Q-resolution proofs.
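
To make "(counter)model" concrete: for a true QBF, a Skolem-function model assigns each existential variable a function of the preceding universal variables such that the matrix is always satisfied. The check below verifies such a candidate model for a tiny invented formula; it says nothing about how certificates are extracted from LQ-resolution proofs.

```python
from itertools import product

# Tiny true QBF: forall x . exists y . (x <-> y), i.e. prefix ∀x∃y over the
# matrix (x ∨ ¬y) ∧ (¬x ∨ y). A Skolem-function model sets y := f(x).

def matrix(x, y):
    return (x or not y) and (not x or y)

def skolem_y(x):
    """Candidate Skolem function for the existential variable y."""
    return x            # y simply copies x

def certifies(skolem, universal_vars=1):
    """Check the matrix under all universal assignments, with y := skolem(x)."""
    return all(matrix(*xs, skolem(*xs))
               for xs in product([False, True], repeat=universal_vars))

print("Skolem model certifies the QBF:", certifies(skolem_y))   # True
```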


From Classical to Consistent Query Answering under Existential Rules

AAAI Conferences

Querying inconsistent ontologies is an intriguing new problem that gave rise to a flourishing research activity in the description logic (DL) community. The computational complexity of consistent query answering under the main DLs is rather well understood; however, little is known about existential rules. The goal of the current work is to perform an in-depth analysis of the complexity of consistent query answering under the main decidable classes of existential rules enriched with negative constraints. Our investigation focuses on one of the most prominent inconsistency-tolerant semantics, namely, the AR semantics. We establish a generic complexity result, which demonstrates the tight connection between classical and consistent query answering. This result allows us to obtain in a uniform way a relatively complete picture of the complexity of our problem.
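
Operationally, the AR semantics entails an answer iff it holds in every repair, i.e., in every maximal subset of the data that is consistent with the constraints. The brute-force sketch below illustrates this on ground facts with a single denial constraint; the facts, the constraint, and the helper names are invented, and real consistent query answering under existential rules is far more involved.

```python
from itertools import combinations

# Ground facts and a denial constraint saying nobody is both a student and a
# professor. The data is inconsistent: 'ann' violates the constraint.
FACTS = {("student", "ann"), ("prof", "ann"), ("prof", "bob")}

def consistent(facts):
    people = {x for _, x in facts}
    return not any(("student", p) in facts and ("prof", p) in facts
                   for p in people)

def repairs(facts):
    """Maximal consistent subsets of the facts (brute force)."""
    candidates = [set(s) for r in range(len(facts), -1, -1)
                  for s in combinations(sorted(facts), r) if consistent(set(s))]
    return [c for c in candidates
            if not any(c < d for d in candidates)]   # keep only maximal ones

def ar_entails(query_fact, facts):
    """AR semantics: the query fact must hold in every repair."""
    return all(query_fact in r for r in repairs(facts))

print(ar_entails(("prof", "bob"), FACTS))   # True: survives every repair
print(ar_entails(("prof", "ann"), FACTS))   # False: dropped in some repair
```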


The Complexity of Recognizing Incomplete Single-Crossing Preferences

AAAI Conferences

We study the complexity of deciding if a given profile of incomplete votes (i.e., a profile of partial orders over a given set of alternatives) can be extended to a single-crossing profile of complete votes (total orders). This problem models settings where we have partial knowledge regarding voters' preferences and we would like to understand whether the given preference profile may be single-crossing. We show that this problem admits a polynomial-time algorithm when the order of votes is fixed and the input profile consists of top orders, but becomes NP-complete if we are allowed to permute the votes and the input profile consists of weak orders or independent-pairs orders. Also, we identify a number of practical special cases of both problems that admit polynomial-time algorithms.
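
For complete votes and a fixed order of voters, single-crossingness is easy to check: for every pair of alternatives, the voters preferring one over the other must form a prefix of the order (equivalently, the preference switches at most once along the order). A straightforward check under these assumptions, with an invented example profile:

```python
from itertools import combinations

# Each vote is a total order (most preferred first); the order of votes is fixed.
PROFILE = [
    ["a", "b", "c"],
    ["b", "a", "c"],
    ["b", "c", "a"],
]

def prefers(vote, x, y):
    return vote.index(x) < vote.index(y)

def is_single_crossing(profile):
    """For every pair {x, y}, voters preferring x over y must form a prefix/suffix."""
    alternatives = profile[0]
    for x, y in combinations(alternatives, 2):
        pattern = [prefers(v, x, y) for v in profile]
        # A prefix/suffix pattern switches its value at most once along the order.
        switches = sum(pattern[i] != pattern[i + 1] for i in range(len(pattern) - 1))
        if switches > 1:
            return False
    return True

print(is_single_crossing(PROFILE))   # True for this profile
```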