Journal of Artificial Intelligence Research
On the Parallel Parameterized Complexity of MaxSAT Variants
Bannach, Max (University of Lübeck) | Skambath, Malte (Kiel University) | Tantau, Till (University of Lübeck)
In the maximum satisfiability problem (max-sat) we are given a propositional formula in conjunctive normal form and have to find an assignment that satisfies as many clauses as possible. We study the parallel parameterized complexity of various versions of max-sat and provide the first constant-time algorithms parameterized either by the solution size or by the allowed excess relative to some guarantee. For the dual parameterized version where the parameter is the number of clauses we are allowed to leave unsatisfied, we present the first parallel algorithm for max-2sat (known as almost-2sat). The difficulty in solving almost-2sat in parallel comes from the fact that the iterative compression method, originally developed to prove that the problem is fixed-parameter tractable at all, is inherently sequential. We observe that a graph flow whose value is a parameter can be computed in parallel and develop a parallel algorithm for the vertex cover problem parameterized above the size of a given matching. Finally, we study the parallel complexity of max-sat parameterized by the vertex cover number, the treedepth, the feedback vertex set number, and the treewidth of the input's incidence graph. While max-sat is fixed-parameter tractable for all of these parameters, we show that they allow different degrees of possible parallelization. For all four we develop dedicated parallel algorithms that are constructive, meaning that they output an optimal assignment - in contrast to results that can be obtained by parallel meta-theorems, which often only solve the decision version.
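To make the problem statement concrete, here is a minimal brute-force MaxSAT evaluator in Python. It only illustrates the problem definition (the DIMACS-style literal encoding is an assumed convention) and does not implement the paper's parallel fixed-parameter algorithms.

```python
from itertools import product

def max_sat(clauses, n_vars):
    """Brute-force MaxSAT: clauses are lists of non-zero ints
    (DIMACS-style: literal k means variable k, -k its negation).
    Returns the maximum number of simultaneously satisfiable clauses
    and a witnessing assignment. Exponential in n_vars; a toy
    reference for the problem, not the paper's method."""
    best, witness = -1, None
    for bits in product([False, True], repeat=n_vars):
        sat = sum(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        if sat > best:
            best, witness = sat, bits
    return best, witness

# (x1 or x2), (not x1 or x2), (not x2): every assignment satisfies exactly 2
print(max_sat([[1, 2], [-1, 2], [-2]], 2))
```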
Actor Prioritized Experience Replay
Saglam, Baturay (Bilkent University) | Mutlu, Furkan B. (Bilkent University) | Cicek, Dogan C. (Bilkent University) | Kozat, Suleyman S. (Bilkent University)
A widely studied deep reinforcement learning (RL) technique known as Prioritized Experience Replay (PER) allows agents to learn from transitions sampled with non-uniform probability proportional to their temporal-difference (TD) error. Although it has been shown that PER is one of the most crucial components for the overall performance of deep RL methods in discrete action domains, many empirical studies indicate that it considerably underperforms off-policy actor-critic algorithms. We theoretically show that actor networks cannot be effectively trained with transitions that have large TD errors. As a result, the approximate policy gradient computed under the Q-network diverges from the actual gradient computed under the optimal Q-function. Motivated by this, we introduce a novel experience replay sampling framework for actor-critic methods, which also takes into account stability issues and the recent findings behind the poor empirical performance of PER. The introduced algorithm suggests a new branch of improvements to PER and schedules effective and efficient training for both actor and critic networks. An extensive set of experiments verifies our theoretical findings, showing that our method outperforms competing approaches and achieves state-of-the-art results over the standard off-policy actor-critic algorithms.
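For readers unfamiliar with PER, the following sketch shows the proportional prioritization scheme the abstract refers to (in the style of Schaul et al.); class and parameter names are illustrative, and the flat priority array stands in for the sum-tree used in efficient implementations.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional PER sketch: transitions are sampled with
    probability p_i^alpha / sum_j p_j^alpha, where p_i = |TD error| + eps.
    Illustrative only; not the paper's proposed algorithm."""
    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.data, self.prios, self.pos = [], np.zeros(capacity), 0

    def add(self, transition):
        # new transitions get the current maximum priority
        max_p = self.prios.max() if self.data else 1.0
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.prios[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        p = self.prios[: len(self.data)] ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=p)
        weights = (len(self.data) * p[idx]) ** (-beta)  # importance weights
        weights /= weights.max()
        return [self.data[i] for i in idx], idx, weights

    def update_priorities(self, idx, td_errors):
        self.prios[idx] = np.abs(td_errors) + self.eps
```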
Diagnosing AI Explanation Methods with Folk Concepts of Behavior
Jacovi, Alon (Bar Ilan University and Google Research) | Bastings, Jasmijn (Google Research) | Gehrmann, Sebastian (Google Research) | Goldberg, Yoav (Bar Ilan University and the Allen Institute for Artificial Intelligence) | Filippova, Katja (Google Research)
We investigate a formalism for the conditions of a successful explanation of AI. We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it. Theory of mind literature discusses the folk concepts that humans use to understand and generalize behavior. We posit that folk concepts of behavior provide us with a "language" that humans understand behavior with. We use these folk concepts as a framework of social attribution by the human explainee--the information constructs that humans are likely to comprehend from explanations--by introducing a blueprint for an explanatory narrative (Figure 1) that explains AI behavior with these constructs. We then demonstrate that many XAI methods today can be mapped to folk concepts of behavior in a qualitative evaluation. This allows us to uncover the failure modes that prevent current methods from explaining successfully--i.e., the information constructs that are missing for any given XAI method, and whose inclusion can decrease the likelihood of misunderstanding AI behavior.
Clustering What Matters: Optimal Approximation for Clustering with Outliers
Agrawal, Akanksha (Indian Institute of Technology, Madras) | Inamdar, Tanmay (University of Bergen) | Saurabh, Saket (Institute of Mathematical Sciences, Chennai, India) | Xue, Jie (NYU Shanghai)
Clustering with outliers is one of the most fundamental problems in Computer Science. Given a set X of n points and two numbers k and m, clustering with outliers asks us to exclude m points from X and partition the remaining points into k clusters so as to minimize a certain cost function. In this paper, we give a general approach for solving clustering with outliers, which results in a fixed-parameter tractable (FPT) algorithm in k and m—i.e., an algorithm with running time of the form f(k, m) · n^O(1) for some function f—that almost matches the approximation ratio for its outlier-free counterpart. As a corollary, we obtain FPT approximation algorithms with optimal approximation ratios for k-Median and k-Means with outliers in general and Euclidean metrics. We also exhibit more applications of our approach to other variants of the problem that impose additional constraints on the clustering, such as fairness or matroid constraints.
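A simple heuristic for the Euclidean case conveys the problem setting: a Lloyd-style loop that re-fits centers after discarding the m farthest points each round. This is only an illustrative baseline (all names are ours), not the paper's FPT approximation algorithm.

```python
import numpy as np

def kmeans_with_outliers(X, k, m, iters=50, seed=0):
    """Heuristic k-Means with m outliers: each round, the m points
    farthest from their nearest center are excluded before centers
    are recomputed. Returns (centers, indices of excluded points)."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        nearest = d.argmin(1)
        dist = d[np.arange(len(X)), nearest]
        inliers = np.argsort(dist)[: len(X) - m]  # drop the m farthest points
        for j in range(k):
            pts = X[inliers][nearest[inliers] == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers, np.setdiff1d(np.arange(len(X)), inliers)
```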
A Benchmark Study on Knowledge Graphs Enrichment and Pruning Methods in the Presence of Noisy Relationships
Faralli, Stefano (Sapienza University of Rome) | Lenzi, Andrea (Sapienza University of Rome) | Velardi, Paola (Sapienza University of Rome)
In the past few years, knowledge graphs (KGs), as a form of structured human intelligence, have attracted considerable research attention from academia and industry. In this very active field of study, a widely explored problem is that of link prediction, the task of predicting whether two nodes should be connected, based on node attributes and local or global graph connectivity properties. The state of the art in this area is represented by techniques based on graph embeddings. However, KGs, especially those acquired using automated or partly automated techniques, are often riddled with noise, e.g., wrong relationships, which makes the problem of link deletion as important as that of link prediction. In this paper, we address three main research questions. The first is about the true effectiveness of different knowledge graph embedding models under the presence of an increasing number of wrong links. The second is to assess whether methods that can effectively predict unknown relationships work equally well in recognizing incorrect relations. The third is to verify whether there are systems robust enough to maintain primacy in all experimental conditions. To answer these research questions, we performed a systematic benchmark study in which the experimental setting includes ten state-of-the-art models, three common KG datasets with different structural properties and three downstream tasks: the widely explored tasks of link prediction and triple classification, and the less popular task of link deletion. Comparative studies often yield contradictory results, where the same systems score better or worse depending on the experimental context. In our work, in order to facilitate the discovery of clear performance patterns and their interpretation, we select and/or aggregate performance data to highlight each specific comparison dimension: dataset complexity, type of task, category of models, and robustness against noise.
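As an illustration of how embedding models support all three tasks, consider the classic TransE scoring function; the vectors below are toy values, not taken from any benchmarked system.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility score: -||h + r - t||_2, higher is more
    plausible. With any such scorer, link prediction ranks candidate
    tails for (h, r, ?), triple classification thresholds the score,
    and link deletion flags the lowest-scoring existing edges as
    likely noise."""
    return -np.linalg.norm(h + r - t)

h, r = np.array([0.1, 0.9]), np.array([0.5, -0.2])
good_t, bad_t = np.array([0.6, 0.7]), np.array([-1.0, 2.0])
assert transe_score(h, r, good_t) > transe_score(h, r, bad_t)
```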
Optimality Guarantees for Particle Belief Approximation of POMDPs
Lim, Michael H. (UC Berkeley) | Becker, Tyler J. | Kochenderfer, Mykel J. | Tomlin, Claire J. | Sunberg, Zachary N.
Partially observable Markov decision processes (POMDPs) provide a flexible representation for real-world decision and control problems. However, POMDPs are notoriously difficult to solve, especially when the state and observation spaces are continuous or hybrid, which is often the case for physical systems. While recent online sampling-based POMDP algorithms that plan with observation likelihood weighting have shown practical effectiveness, a general theory characterizing the approximation error of the particle filtering techniques that these algorithms use has not previously been proposed. Our main contribution is bounding the error between any POMDP and its corresponding finite sample particle belief MDP (PB-MDP) approximation. This fundamental bridge between PB-MDPs and POMDPs allows us to adapt any sampling-based MDP algorithm to a POMDP by solving the corresponding particle belief MDP, thereby extending the convergence guarantees of the MDP algorithm to the POMDP. Practically, this is implemented by using the particle filter belief transition model as the generative model for the MDP solver. While this requires access to the observation density model from the POMDP, it only increases the transition sampling complexity of the MDP solver by a factor of O(C), where C is the number of particles. Thus, when combined with sparse sampling MDP algorithms, this approach can yield algorithms for POMDPs that have no direct theoretical dependence on the size of the state and observation spaces. In addition to our theoretical contribution, we perform five numerical experiments on benchmark POMDPs to demonstrate that a simple MDP algorithm adapted using PB-MDP approximation, Sparse-PFT, achieves performance competitive with other leading continuous observation POMDP solvers.
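The particle belief transition the abstract describes can be sketched as a single weighted bootstrap filter step; `gen_model` and `obs_density` are assumed interfaces standing in for the POMDP's generative model and observation density, not a specific library's API.

```python
import numpy as np

def pf_step(particles, action, obs, gen_model, obs_density, rng):
    """One particle belief transition: propagate each particle through
    gen_model(s, a, rng) -> (s', r), weight by obs_density(o, s', a),
    and resample. Costs O(C) generative calls for C particles,
    matching the factor discussed in the abstract."""
    next_states, rewards, weights = [], [], []
    for s in particles:
        s2, r = gen_model(s, action, rng)   # one generative call per particle
        next_states.append(s2)
        rewards.append(r)
        weights.append(obs_density(obs, s2, action))
    w = np.asarray(weights, dtype=float)
    w = w / w.sum() if w.sum() > 0 else np.full(len(w), 1.0 / len(w))
    idx = rng.choice(len(next_states), size=len(next_states), p=w)
    # resampled belief and the likelihood-weighted mean reward
    return [next_states[i] for i in idx], float(np.dot(w, rewards))
```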
Automatically Finding the Right Probabilities in Bayesian Networks
Salmani, Bahare (RWTH Aachen University) | Katoen, Joost-Pieter (RWTH Aachen University)
This paper presents alternative techniques for inference on classical Bayesian networks in which all probabilities are fixed, and for synthesis problems when conditional probability tables (CPTs) in such networks contain symbolic parameters rather than concrete probabilities. The key idea is to exploit probabilistic model checking as well as its recent extension to parameter synthesis techniques for parametric Markov chains. To enable this, the Bayesian networks are transformed into Markov chains and their objectives are mapped onto probabilistic temporal logic formulas. For exact inference, we compare probabilistic model checking to weighted model counting on various Bayesian network benchmarks. We contrast symbolic model checking using multi-terminal binary (aka: algebraic) decision diagrams to symbolic inference using probabilistic sentential decision diagrams, symbolic data structures that are tailored to Bayesian networks. For the parametric setting, we describe how our techniques can be used for various synthesis problems such as computing sensitivity functions (and values), simple and difference parameter tuning and ratio parameter tuning. Our parameter synthesis techniques are applicable to arbitrarily many, possibly dependent, parameters that may occur in multiple CPTs. This lifts restrictions, e.g., on the number of parametrized CPTs, or on parameter dependencies between several CPTs, that exist in the literature. Experiments on several benchmarks show that our parameter synthesis techniques can treat parameter synthesis for Bayesian networks (with hundreds of unknown parameters) that are out of reach for existing techniques.
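For contrast with the model-checking pipeline, exact Bayesian network inference can be phrased as brute-force enumeration; the sketch below is a toy baseline over Boolean variables (the data layout is ours), exponential in the number of variables and not the paper's technique.

```python
from itertools import product

def bn_prob(parents, cpt, query, evidence):
    """P(query | evidence) in a Boolean Bayesian network by full
    enumeration. parents[v] lists v's parents; cpt[v] maps a tuple of
    parent truth values to P(v = True | parents)."""
    vars_ = list(parents)

    def joint(assign):
        p = 1.0
        for v in vars_:
            pv = cpt[v][tuple(assign[u] for u in parents[v])]
            p *= pv if assign[v] else 1.0 - pv
        return p

    num = den = 0.0
    for bits in product([False, True], repeat=len(vars_)):
        assign = dict(zip(vars_, bits))
        if all(assign[v] == b for v, b in evidence.items()):
            w = joint(assign)
            den += w
            if assign[query]:
                num += w
    return num / den

# Two-node example: Rain -> WetGrass
parents = {"Rain": [], "Wet": ["Rain"]}
cpt = {"Rain": {(): 0.2}, "Wet": {(True,): 0.9, (False,): 0.1}}
print(bn_prob(parents, cpt, "Rain", {"Wet": True}))  # 0.18 / 0.26 ~ 0.692
```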
Improved Peel-and-Bound: Methods for Generating Dual Bounds with Multivalued Decision Diagrams
Rudich, Isaac (Polytechnique Montréal) | Cappart, Quentin (Polytechnique Montréal) | Rousseau, Louis-Martin (Polytechnique Montréal)
Decision diagrams are an increasingly important tool in cutting-edge solvers for discrete optimization. However, the field of decision diagrams is relatively new, and is still incorporating the library of techniques that conventional solvers have had decades to build. We drew inspiration from the warm-start technique used in conventional solvers to address one of the major challenges faced by decision diagram based methods. Decision diagrams become more useful the wider they are allowed to be, but also become more costly to generate, especially with large numbers of variables. In the original version of this paper, we presented a method of peeling off a subgraph of a previously constructed diagram and using it as the initial diagram for subsequent iterations, a method we call peel-and-bound. We tested the method on the sequence ordering problem, and our results indicate that our peel-and-bound scheme generates stronger bounds than a branch-and-bound scheme using the same propagators, and at significantly less computational cost. In this extended version of the paper, we also propose new methods for using relaxed decision diagrams to improve the solutions found using restricted decision diagrams, discuss the heuristic decisions involved with the parallelization of peel-and-bound, and discuss how peel-and-bound can be hyper-optimized for sequencing problems. Furthermore, we test the new methods on the sequence ordering problem and the traveling salesman problem with time-windows (TSPTW), and include an updated and generalized implementation of the algorithm capable of handling any discrete optimization problem. The new results show that peel-and-bound outperforms ddo (a decision diagram based branch-and-bound solver) on the TSPTW. We also close 15 open benchmark instances of the TSPTW.
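The dual and primal bounds that peel-and-bound refines come from relaxed and restricted decision diagrams. The following sketch builds a width-limited diagram for 0/1 knapsack to show both bound types; it illustrates the underlying machinery under our own simplifications, not the paper's algorithm.

```python
def knapsack_dd_bound(weights, values, cap, max_width, relaxed):
    """Width-limited decision diagram for 0/1 knapsack. Each layer is
    a dict {remaining_capacity: best_value}. When a layer exceeds
    max_width, a restricted DD drops the worst states (primal bound),
    while a relaxed DD merges them into one optimistic state that
    dominates each of them (dual bound)."""
    layer = {cap: 0}
    for w, v in zip(weights, values):
        nxt = {}
        for c, val in layer.items():
            for take in (0, 1):
                if take and w > c:
                    continue
                c2, v2 = c - take * w, val + take * v
                nxt[c2] = max(nxt.get(c2, -1), v2)
        if len(nxt) > max_width:
            ranked = sorted(nxt.items(), key=lambda cv: cv[1], reverse=True)
            keep, rest = ranked[:max_width - 1], ranked[max_width - 1:]
            nxt = dict(keep)
            if relaxed and rest:
                # merged state keeps the best capacity and value seen,
                # so the final value can only go up -> valid upper bound
                c_m = max(c for c, _ in rest)
                v_m = max(v for _, v in rest)
                nxt[c_m] = max(nxt.get(c_m, -1), v_m)
        layer = nxt
    return max(layer.values())

w, v, cap = [3, 4, 5, 6], [4, 5, 7, 9], 10
lo = knapsack_dd_bound(w, v, cap, max_width=2, relaxed=False)  # restricted
hi = knapsack_dd_bound(w, v, cap, max_width=2, relaxed=True)   # relaxed
print(lo, hi)  # lo <= true optimum (14) <= hi
```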
Certified Dominance and Symmetry Breaking for Combinatorial Optimisation
Bogaerts, Bart (Vrije Universiteit Brussel) | Gocht, Stephan | McCreesh, Ciaran | Nordström, Jakob
Symmetry and dominance breaking can be crucial for solving hard combinatorial search and optimisation problems, but the correctness of these techniques sometimes relies on subtle arguments. For this reason, it is desirable to produce efficient, machine-verifiable certificates that solutions have been computed correctly. Building on the cutting planes proof system, we develop a certification method for optimisation problems in which symmetry and dominance breaking is easily expressible. Our experimental evaluation demonstrates that we can efficiently verify fully general symmetry breaking in Boolean satisfiability (SAT) solving, thus providing, for the first time, a unified method to certify a range of advanced SAT techniques that also includes cardinality and parity (XOR) reasoning. In addition, we apply our method to maximum clique solving and constraint programming as a proof of concept that the approach applies to a wider range of combinatorial problems.
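Two core rules of the cutting planes proof system that the certification method builds on, addition and division, can be sketched directly over pseudo-Boolean constraints. This is a toy fragment (cancellation of opposite literals is omitted for brevity), not the authors' proof checker.

```python
import math

def cp_add(c1, c2):
    """Cutting-planes addition of two pseudo-Boolean constraints
    sum a_i l_i >= A and sum b_i l_i >= B, with literals as dict keys."""
    coeffs = dict(c1[0])
    for lit, b in c2[0].items():
        coeffs[lit] = coeffs.get(lit, 0) + b
    return coeffs, c1[1] + c2[1]

def cp_divide(c, d):
    """Division by a positive integer d with ceiling rounding, which
    is sound because all variables take 0/1 values."""
    coeffs, rhs = c
    return ({l: math.ceil(a / d) for l, a in coeffs.items()},
            math.ceil(rhs / d))

c1 = ({"x1": 1, "x2": 1}, 1)  # x1 + x2 >= 1
c2 = ({"x1": 1, "x3": 1}, 1)  # x1 + x3 >= 1
# add then divide by 2: derives x1 + x2 + x3 >= 1
print(cp_divide(cp_add(c1, c2), 2))
```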
Classes of Hard Formulas for QBF Resolution
Schleitzer, Agnes (University of Jena) | Beyersdorff, Olaf (Friedrich-Schiller-Universität Jena, Fakultät für Mathematik und Informatik, Institut für Informatik)
To date, we know only a few handcrafted quantified Boolean formulas (QBFs) that are hard for central QBF resolution systems such as Q-Res and QU-Res, and only one specific QBF family to separate Q-Res and QU-Res. Here we provide a general method to construct hard formulas for Q-Res and QU-Res. The construction uses simple propositional formulas (e.g. minimally unsatisfiable formulas) in combination with easy QBF gadgets (Σ^b_2 formulas without constant winning strategies). This leads to a host of new hard formulas, including new classes of hard random QBFs. We further present generic constructions for formulas separating Q-Res and QU-Res, and for separating Q-Res and LD-Q-Res.