
Collaborating Authors

 Woltran, Stefan


Epistemic Logic Programs: Non-Ground and Counting Complexity

arXiv.org Artificial Intelligence

Answer Set Programming (ASP) is a prominent problem-modeling and solving framework, whose solutions are called answer sets. Epistemic logic programs (ELP) extend ASP to reason about all or some answer sets. Solutions to an ELP can be seen as consequences over multiple collections of answer sets, known as world views. While the complexity of propositional programs is well studied, the non-ground case remains open. This paper establishes the complexity of non-ground ELPs. We provide a comprehensive picture for well-known program fragments, which turn out to be complete for the class NEXPTIME with access to oracles up to \Sigma^P_2. In the quantitative setting, we establish counting complexity results beyond #EXP. To mitigate this high complexity, we establish results for the case of bounded predicate arity, reaching up to the fourth level of the polynomial hierarchy. Finally, we provide ETH-tight runtime results for the parameter treewidth, which have applications in quantitative reasoning, where we reason about (marginal) probabilities of epistemic literals.
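
As a minimal illustration of the epistemic reading used above (a literal is "known" if it holds in all answer sets and "possible" if it holds in some), the following Python sketch checks these conditions over a given collection of answer sets. The function names and the plain-set representation are illustrative assumptions, not taken from the paper.

    # Hypothetical helpers: the epistemic "known"/"possible" reading over a fixed
    # collection of answer sets, each represented as a plain set of atoms.
    def known(atom, answer_sets):
        """An atom is known if it occurs in every answer set."""
        return all(atom in s for s in answer_sets)

    def possible(atom, answer_sets):
        """An atom is possible if it occurs in at least one answer set."""
        return any(atom in s for s in answer_sets)

    # The program { a :- not b.  b :- not a. } has answer sets {a} and {b}:
    # neither atom is known, but both are possible.
    answer_sets = [{"a"}, {"b"}]
    assert not known("a", answer_sets) and possible("a", answer_sets)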


A Unified View on Forgetting and Strong Equivalence Notions in Answer Set Programming

arXiv.org Artificial Intelligence

Answer Set Programming (ASP) is a prominent rule-based language for knowledge representation and reasoning with roots in logic programming and non-monotonic reasoning. The aim to capture the essence of removing (ir)relevant details in ASP programs led to the investigation of different notions, from forgetting under strong persistence (SP), to faithful abstractions, and, recently, strong simplifications, where the latter two can be seen as relaxed and strengthened notions of forgetting, respectively. Although it was observed that these notions are related, especially since they can be characterized through the semantics for strong equivalence, it remained unclear whether they can be brought together. In this work, we bridge this gap by introducing a novel relativized equivalence notion, a relaxation of the recent simplification notion, that captures all related notions from the literature. We provide necessary and sufficient conditions for relativized simplifiability, which show that the challenging case arises when the context programs do not contain all the atoms to be removed. We then introduce an operator that combines projection with a relaxation of SP-forgetting to obtain relativized simplifications. We furthermore present complexity results that complete the overall picture.
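
Since the abstract refers to characterizations through the semantics for strong equivalence, the following Python sketch spells out the standard SE-model view for ground normal programs: two programs are strongly equivalent exactly if they have the same SE-models. The rule encoding (head, positive body, negative body) and all names are ad hoc assumptions for illustration only.

    from itertools import chain, combinations

    def powerset(atoms):
        atoms = list(atoms)
        return [set(c) for c in chain.from_iterable(
            combinations(atoms, r) for r in range(len(atoms) + 1))]

    def satisfies(interp, rules):
        # A rule is (head, pos_body, neg_body); head None encodes a constraint.
        for head, pos, neg in rules:
            if pos <= interp and not (neg & interp):
                if head is None or head not in interp:
                    return False
        return True

    def reduct(rules, interp):
        # Gelfond-Lifschitz reduct: drop rules blocked by interp, drop negative bodies.
        return [(h, pos, set()) for h, pos, neg in rules if not (neg & interp)]

    def se_models(rules, atoms):
        return {(frozenset(x), frozenset(y))
                for y in powerset(atoms) if satisfies(y, rules)
                for x in powerset(y) if satisfies(x, reduct(rules, y))}

    # Two programs over the same atoms are strongly equivalent iff their SE-models coincide.
    P1 = [("a", set(), {"b"})]                       # a :- not b.
    P2 = [("a", set(), {"b"}), ("a", {"a"}, set())]  # adds the redundant rule a :- a.
    assert se_models(P1, {"a", "b"}) == se_models(P2, {"a", "b"})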


Solving Projected Model Counting by Utilizing Treewidth and its Limits

arXiv.org Artificial Intelligence

In this paper, we introduce a novel algorithm to solve projected model counting (PMC). PMC asks to count the solutions of a Boolean formula with respect to a given set of projection variables, where multiple solutions that are identical when restricted to the projection variables count as only one solution. Inspired by the observation that treewidth is one of the most prominent structural parameters, our algorithm utilizes small treewidth of the primal graph of the input instance. More precisely, it runs in time O(2^{2^{k+4}} n^2), where k is the treewidth and n is the input size of the instance. In other words, we obtain that the problem PMC is fixed-parameter tractable when parameterized by treewidth. Further, we take the exponential time hypothesis (ETH) into consideration and establish lower bounds for bounded-treewidth algorithms for PMC, yielding asymptotically tight runtime bounds for our algorithm. While the algorithm above serves as a first theoretical upper bound, and although it might be quite appealing for small values of k, a naive implementation adhering to this runtime bound unsurprisingly already struggles on instances of relatively small width. Therefore, we turn our attention to several measures in order to resolve this issue and exploit treewidth in practice: We present a technique called nested dynamic programming, where different levels of abstraction of the primal graph are used to (recursively) compute and refine tree decompositions of a given instance. Finally, we provide a nested dynamic programming algorithm and an implementation that relies on database technology for PMC and a prominent special case of PMC, namely model counting (#Sat). Experiments indicate that the advancements are promising, allowing us to solve instances with treewidth upper bounds beyond 200.
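
For readers unfamiliar with PMC, the brute-force Python sketch below only illustrates the problem statement (count satisfying assignments of a CNF, identified when they agree on the projection variables); it is not the paper's treewidth-based dynamic programming algorithm, and the encoding of clauses as signed integers is an assumption for illustration.

    from itertools import product

    def projected_count(clauses, variables, projection_vars):
        """clauses: list of lists of signed literals, e.g. [[1, -2], [2, 3]];
        variables: all variable indices; projection_vars: subset to project onto."""
        seen = set()
        for bits in product([False, True], repeat=len(variables)):
            assignment = dict(zip(variables, bits))
            if all(any(assignment[abs(l)] == (l > 0) for l in clause) for clause in clauses):
                # Two models count once if they agree on the projection variables.
                seen.add(frozenset(v for v in projection_vars if assignment[v]))
        return len(seen)

    # (x1 or x2) has three models over {x1, x2}; projected onto {x1}, only two remain.
    assert projected_count([[1, 2]], [1, 2], [1]) == 2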


Recursion in Abstract Argumentation is Hard --- On the Complexity of Semantics Based on Weak Admissibility

Journal of Artificial Intelligence Research

We study the computational complexity of abstract argumentation semantics based on weak admissibility, a recently introduced concept to deal with arguments of self-defeating nature. Our results reveal that semantics based on weak admissibility are of much higher complexity (under typical assumptions) compared to all argumentation semantics which have been analysed in terms of complexity so far. In fact, we show PSPACE-completeness of all non-trivial standard decision problems for weak-admissibility based semantics. We then investigate potential tractable fragments and show that restricting the frameworks under consideration to certain graph classes significantly reduces the complexity. We also show that weak-admissibility based extensions can be computed by dividing the given graph into its strongly connected components (SCCs). This technique ensures that the bottleneck when computing extensions is the size of the largest SCC instead of the size of the graph itself and therefore contributes to the search for fixed-parameter tractable implementations for reasoning with weak admissibility.
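
The SCC decomposition mentioned above can be sketched as follows. This is a textbook Kosaraju-style procedure on the attack graph, with illustrative names; it deliberately omits the weak-admissibility reasoning that the paper performs on each component.

    from collections import defaultdict

    def sccs(arguments, attacks):
        # Build the attack graph and its reverse.
        graph, rev = defaultdict(list), defaultdict(list)
        for a, b in attacks:
            graph[a].append(b)
            rev[b].append(a)

        # First pass: record finishing order on the original graph.
        order, seen = [], set()
        def dfs1(v):
            seen.add(v)
            for w in graph[v]:
                if w not in seen:
                    dfs1(w)
            order.append(v)
        for v in arguments:
            if v not in seen:
                dfs1(v)

        # Second pass: collect components on the reversed graph.
        comps, assigned = [], set()
        def dfs2(v, comp):
            assigned.add(v)
            comp.append(v)
            for w in rev[v]:
                if w not in assigned:
                    dfs2(w, comp)
        for v in reversed(order):
            if v not in assigned:
                comp = []
                dfs2(v, comp)
                comps.append(comp)
        return comps

    # A 3-cycle of attacks plus one argument attacked from the cycle: two SCCs.
    print(sccs(["a", "b", "c", "d"], [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]))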


Aspartix-V21

arXiv.org Artificial Intelligence

In this solver description we present ASPARTIX-V, in its 2021 edition, which participates in the International Competition on Computational Models of Argumentation (ICCMA) 2021. ASPARTIX-V is capable of solving all classical (static) reasoning tasks that are part of ICCMA'21 and extends the ASPARTIX system suite by incorporating recent ASP language constructs (e.g., conditional literals), domain heuristics within ASP, and multi-shot methods. In this light, ASPARTIX-V deviates from the traditional focus of ASPARTIX on monolithic approaches (i.e., one-shot solving via a single ASP encoding) to further enhance performance.


Utilizing Treewidth for Quantitative Reasoning on Epistemic Logic Programs

arXiv.org Artificial Intelligence

Extending the popular Answer Set Programming (ASP) paradigm by introspective reasoning capacities has received increasing interest in recent years. Particular attention has been given to the formalism of epistemic logic programs (ELPs), where standard rules are equipped with modal operators that allow one to express conditions on literals being known or possible, i.e., contained in all or some answer sets, respectively. ELPs thus deliver multiple collections of answer sets, known as world views. Employing ELPs for reasoning problems has so far mainly been restricted to standard decision problems (complexity analysis) and enumeration (development of systems) of world views. In this paper, we take the next step and contribute to epistemic logic programming in two ways: First, we establish quantitative reasoning for ELPs, where the acceptance of a certain set of literals depends on the number (proportion) of world views that are compatible with the set. Second, we present a novel system that is capable of efficiently solving the underlying counting problems required to answer such quantitative reasoning problems. Our system exploits the graph-based measure treewidth and works by iteratively finding and refining (graph) abstractions of an ELP program. On top of these abstractions, we apply dynamic programming combined with existing search-based solvers like (e)clingo for hard combinatorial subproblems that appear during solving. It turns out that our approach is competitive with existing systems that were introduced recently. This work is under consideration for acceptance in TPLP.
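
The quantitative acceptance measure described above can be sketched as a simple ratio. In the snippet below a world view is simplified to the set of literals it makes known and "compatible" is read as containment; both are assumptions made only for illustration, not the system's internal representation.

    from fractions import Fraction

    def proportion_compatible(query, world_views):
        """Fraction of world views that contain every literal in the query set."""
        compatible = [wv for wv in world_views if query <= wv]
        return Fraction(len(compatible), len(world_views))

    # Three world views, two of which make the literal "a" known.
    world_views = [{"a"}, {"a", "b"}, {"b"}]
    print(proportion_compatible({"a"}, world_views))  # 2/3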


Expressiveness of SETAFs and Support-Free ADFs under 3-valued Semantics

arXiv.org Artificial Intelligence

Generalizing the attack structure in argumentation frameworks (AFs) has been studied in different ways. Most prominently, the binary attack relation of Dung frameworks has been extended to the notion of collective attacks. The resulting formalism is often termed SETAFs. Another approach is provided via abstract dialectical frameworks (ADFs), where acceptance conditions specify the relation between arguments; restricting these conditions naturally allows for so-called support-free ADFs. The aim of the paper is to shed light on the relation between these two different approaches. To this end, we investigate and compare the expressiveness of SETAFs and support-free ADFs through the lens of 3-valued semantics. Our results show that it is only the presence of unsatisfiable acceptance conditions in support-free ADFs that discriminates between the two approaches.
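
To make the notion of collective attacks concrete, the following sketch models a SETAF attack as a pair (T, a), where the set T of arguments jointly attacks the argument a, and checks conflict-freeness under this reading; the representation and names are illustrative and not taken from the paper.

    def conflict_free(s, attacks):
        """A set s is conflict-free if no attack (T, a) has T inside s and a in s."""
        return not any(set(t) <= s and a in s for t, a in attacks)

    # {a, b} jointly attack c, but neither argument attacks c on its own.
    attacks = [(("a", "b"), "c")]
    assert conflict_free({"a", "c"}, attacks)         # only a present: no joint attack
    assert not conflict_free({"a", "b", "c"}, attacks)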


Design and Results of the Second International Competition on Computational Models of Argumentation

arXiv.org Artificial Intelligence

Within AI, several sub-fields are particularly relevant to - and benefit from - studies of argumentation. These include decision support, knowledge representation, nonmonotonic reasoning, and multiagent systems. Moreover, computational argumentation provides a formal investigation of problems that have been studied only informally by philosophers, and which consequently allows for the development of computational tools for argumentation, see (Atkinson et al., 2017). Since its invention by Dung (1995), abstract argumentation based on argumentation frameworks (AFs) has become a key concept for the field. In AFs, argumentation scenarios are modeled as simple directed graphs, where the vertices represent arguments and each edge corresponds to an attack between two arguments. Besides its simplicity, there are several reasons for the success story of this concept: First, a multitude of semantics (Baroni et al., 2011, 2018) allows for tight coupling of argumentation with existing formalisms from the areas of knowledge representation and logic programming; indeed, one of the main motivations of Dung's work (Dung, 1995) was to give a uniform representation of several nonmonotonic formalisms, including Reiter's Default Logic, Pollock's Defeasible Logic, and Logic Programming (LP) with default negation; the latter led to a series of works that investigated the relationship between different LP semantics and different AF semantics, see e.g.
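
As a small, self-contained illustration of the AF model described above (arguments as vertices, attacks as directed edges), the sketch below computes the grounded extension as the least fixed point of the characteristic function; the data layout and names are illustrative assumptions.

    def grounded_extension(arguments, attacks):
        """Least fixed point of F(S) = {a : every attacker of a is attacked by S}."""
        attackers = {a: {x for x, y in attacks if y == a} for a in arguments}
        extension = set()
        while True:
            defended = {a for a in arguments
                        if all(attackers[b] & extension for b in attackers[a])}
            if defended == extension:
                return extension
            extension = defended

    # a attacks b, b attacks c: the grounded extension is {a, c}.
    print(grounded_extension(["a", "b", "c"], [("a", "b"), ("b", "c")]))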


On Uniform Equivalence of Epistemic Logic Programs

arXiv.org Artificial Intelligence

Epistemic Logic Programs (ELPs) extend Answer Set Programming (ASP) with epistemic negation and have received renewed interest in recent years. This led to the development of new research and efficient solving systems for ELPs. In practice, ELPs are often written in a modular way, where each module interacts with other modules by accepting sets of facts as input, and passing on sets of facts as output. An interesting question then presents itself: under which conditions can such a module be replaced by another one without changing the outcome, for any set of input facts? This problem is known as uniform equivalence, and has been studied extensively for ASP. For ELPs, however, such an investigation is, as of yet, missing. In this paper, we therefore propose a characterization of uniform equivalence that can be directly applied to the language of state-of-the-art ELP solvers. We also investigate the computational complexity of deciding uniform equivalence for two ELPs, and show that it is on the third level of the polynomial hierarchy.
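
The definition of uniform equivalence can be checked by brute force for ground normal programs over a fixed, finite set of atoms: add every possible set of facts and compare the resulting answer sets. The sketch below does exactly that for plain ASP programs (no epistemic operators); the rule encoding and helper names are assumptions for illustration, and the paper's characterization avoids such exhaustive enumeration.

    from itertools import chain, combinations

    def powerset(atoms):
        atoms = list(atoms)
        return [set(c) for c in chain.from_iterable(
            combinations(atoms, r) for r in range(len(atoms) + 1))]

    def least_model(positive_rules):
        # Fixed-point iteration for a positive program given as (head, pos_body) pairs.
        model, changed = set(), True
        while changed:
            changed = False
            for head, pos in positive_rules:
                if pos <= model and head not in model:
                    model.add(head)
                    changed = True
        return model

    def answer_sets(rules, atoms):
        # I is an answer set iff I is the least model of the reduct of the program w.r.t. I.
        result = set()
        for interp in powerset(atoms):
            reduct = [(h, pos) for h, pos, neg in rules if not (neg & interp)]
            if least_model(reduct) == interp:
                result.add(frozenset(interp))
        return result

    def uniformly_equivalent(p1, p2, atoms):
        # Add every possible set of facts and compare the resulting answer sets.
        for facts in powerset(atoms):
            fact_rules = [(a, set(), set()) for a in facts]
            if answer_sets(p1 + fact_rules, atoms) != answer_sets(p2 + fact_rules, atoms):
                return False
        return True

    # { a :- b. } and { a :- b, not c.  a :- b, c. } behave alike for every set of facts.
    P1 = [("a", {"b"}, set())]
    P2 = [("a", {"b"}, {"c"}), ("a", {"b", "c"}, set())]
    print(uniformly_equivalent(P1, P2, {"a", "b", "c"}))  # True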


Summary Report of the Second International Competition on Computational Models of Argumentation

AI Magazine
