Prediction of Social Dynamic Agents and Long-Tailed Learning Challenges: A Survey

Journal of Artificial Intelligence Research

Autonomous robots that can perform common tasks like driving, surveillance, and chores have the biggest potential for impact due to their frequency of use, and the biggest potential for risk due to their direct interaction with humans. These tasks take place in open-ended environments where humans socially interact and pursue their goals in complex and diverse ways. To operate in such environments, these systems must predict this behavior, especially when it is unexpected and potentially dangerous. We therefore summarize trends in the tasks, modeling methods, datasets, and social interaction modules aimed at predicting the future location of dynamic, socially interactive agents. Furthermore, we describe long-tailed learning techniques from classification and regression problems that can be applied to prediction problems. To our knowledge, this is the first work that reviews social interaction modeling within prediction, and long-tailed learning techniques within regression and prediction.
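
As a concrete taste of the long-tailed regression techniques such surveys cover, the sketch below implements inverse-frequency reweighting, one common reweighting heuristic; the function name and binning scheme are our own illustration, not a method drawn from any single surveyed paper.

```python
import numpy as np

def inverse_frequency_weights(targets, n_bins=20, smooth=1e-3):
    """Per-sample loss weights that up-weight rare target values.

    A common long-tailed regression heuristic: bin the continuous
    targets, estimate the empirical density per bin, and weight each
    sample by the inverse of its bin's frequency.
    """
    counts, edges = np.histogram(targets, bins=n_bins)
    density = counts / counts.sum()
    bin_idx = np.clip(np.digitize(targets, edges[1:-1]), 0, n_bins - 1)
    weights = 1.0 / (density[bin_idx] + smooth)
    return weights / weights.mean()  # keep the overall loss scale unchanged

# Example: a heavy-tailed target distribution (e.g., agent speeds).
targets = np.random.exponential(scale=1.0, size=1000)
w = inverse_frequency_weights(targets)
# Weighted MSE would then be: np.mean(w * (pred - targets) ** 2)
```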


Automatically Finding the Right Probabilities in Bayesian Networks

Journal of Artificial Intelligence Research

This paper presents alternative techniques for inference on classical Bayesian networks in which all probabilities are fixed, and for synthesis problems where conditional probability tables (CPTs) in such networks contain symbolic parameters rather than concrete probabilities. The key idea is to exploit probabilistic model checking, as well as its recent extension to parameter synthesis for parametric Markov chains. To enable this, the Bayesian networks are transformed into Markov chains and their objectives are mapped onto probabilistic temporal logic formulas. For exact inference, we compare probabilistic model checking to weighted model counting on various Bayesian network benchmarks. We contrast symbolic model checking using multi-terminal binary (aka algebraic) decision diagrams with symbolic inference using probabilistic sentential decision diagrams, symbolic data structures that are tailored to Bayesian networks. For the parametric setting, we describe how our techniques can be used for various synthesis problems such as computing sensitivity functions (and values) and simple, difference, and ratio parameter tuning. Our parameter synthesis techniques are applicable to arbitrarily many, possibly dependent, parameters that may occur in multiple CPTs. This lifts restrictions that exist in the literature, e.g., on the number of parametrized CPTs or on parameter dependencies between several CPTs. Experiments on several benchmarks show that our parameter synthesis techniques can treat parameter synthesis for Bayesian networks (with hundreds of unknown parameters) that are out of reach for existing techniques.
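
To illustrate what a sensitivity function looks like, here is a hedged sketch (ours, using sympy directly rather than the paper's model-checking pipeline) for a two-node network A → B with a single symbolic CPT entry:

```python
import sympy as sp

theta = sp.Symbol("theta", positive=True)

# Tiny BN  A -> B  with one symbolic CPT entry: P(A=1) = theta.
p_a1 = theta
p_b1_given_a1 = sp.Rational(9, 10)
p_b1_given_a0 = sp.Rational(2, 10)

# Sensitivity function: the query P(B=1) as a function of theta.
p_b1 = sp.expand(p_b1_given_a1 * p_a1 + p_b1_given_a0 * (1 - p_a1))
print(p_b1)                                              # 7*theta/10 + 1/5

# Simple parameter tuning: for which theta does P(B=1) >= 1/2 hold?
print(sp.solve(sp.Ge(p_b1, sp.Rational(1, 2)), theta))   # theta >= 3/7
```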


Optimality Guarantees for Particle Belief Approximation of POMDPs

Journal of Artificial Intelligence Research

Partially observable Markov decision processes (POMDPs) provide a flexible representation for real-world decision and control problems. However, POMDPs are notoriously difficult to solve, especially when the state and observation spaces are continuous or hybrid, which is often the case for physical systems. While recent online sampling-based POMDP algorithms that plan with observation likelihood weighting have shown practical effectiveness, a general theory characterizing the approximation error of the particle filtering techniques that these algorithms use has not previously been proposed. Our main contribution is bounding the error between any POMDP and its corresponding finite sample particle belief MDP (PB-MDP) approximation. This fundamental bridge between PB-MDPs and POMDPs allows us to adapt any sampling-based MDP algorithm to a POMDP by solving the corresponding particle belief MDP, thereby extending the convergence guarantees of the MDP algorithm to the POMDP. Practically, this is implemented by using the particle filter belief transition model as the generative model for the MDP solver. While this requires access to the observation density model from the POMDP, it only increases the transition sampling complexity of the MDP solver by a factor of O(C), where C is the number of particles. Thus, when combined with sparse sampling MDP algorithms, this approach can yield algorithms for POMDPs that have no direct theoretical dependence on the size of the state and observation spaces. In addition to our theoretical contribution, we perform five numerical experiments on benchmark POMDPs to demonstrate that a simple MDP algorithm adapted using PB-MDP approximation, Sparse-PFT, achieves performance competitive with other leading continuous observation POMDP solvers.
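
The following sketch shows the key implementation idea, the particle filter belief transition used as the MDP solver's generative model; the function names, the unweighted resampling step, and the reward aggregation are our illustrative assumptions, not the paper's code.

```python
import numpy as np

def particle_belief_step(particles, action, step, sample_obs, obs_density, rng):
    """One generative transition of the particle belief MDP (a sketch).

    particles        : list of C states representing the current belief
    step(s, a)       : samples (s', r) from the POMDP generative model
    sample_obs(s', a): samples an observation o ~ Z(. | a, s')
    obs_density      : evaluates Z(o | a, s'), the density the scheme needs

    Propagates every particle and reweights by observation likelihood,
    which is the O(C) overhead per transition noted in the abstract.
    """
    C = len(particles)
    succ, rewards = zip(*(step(s, action) for s in particles))
    o = sample_obs(succ[rng.integers(C)], action)   # shared observation
    w = np.array([obs_density(o, action, sp) for sp in succ], dtype=float)
    w /= w.sum()
    belief_reward = float(np.dot(w, rewards))
    idx = rng.choice(C, size=C, p=w)                # resample to an unweighted set
    return [succ[i] for i in idx], o, belief_reward
```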


Classes of Hard Formulas for QBF Resolution

Journal of Artificial Intelligence Research

To date, we know only a few handcrafted quantified Boolean formulas (QBFs) that are hard for central QBF resolution systems such as Q-Res and QU-Res, and only one specific QBF family to separate Q-Res and QU-Res. Here we provide a general method to construct hard formulas for Q-Res and QU-Res. The construction uses simple propositional formulas (e.g. minimally unsatisfiable formulas) in combination with easy QBF gadgets (Σ^b_2 formulas without constant winning strategies). This leads to a host of new hard formulas, including new classes of hard random QBFs. We further present generic constructions for formulas separating Q-Res and QU-Res, and for separating Q-Res and LD-Q-Res.
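
As a concrete (and deliberately tiny) example of the gadget condition, consider the false QBF ∃x∀y.(x ↔ y): the universal player wins, but only with the non-constant strategy y = ¬x. The brute-force check below is ours, not one of the paper's constructions.

```python
def phi(x, y):
    """Matrix of the false QBF  exists x . forall y . (x <-> y)."""
    return x == y

# No constant universal strategy refutes the formula for every x ...
for y_const in (False, True):
    assert any(phi(x, y_const) for x in (False, True))

# ... but the countermove y(x) = not x falsifies the matrix for every x.
assert all(not phi(x, not x) for x in (False, True))
print("the gadget has a winning universal strategy, but no constant one")
```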


Certified Dominance and Symmetry Breaking for Combinatorial Optimisation

Journal of Artificial Intelligence Research

Symmetry and dominance breaking can be crucial for solving hard combinatorial search and optimisation problems, but the correctness of these techniques sometimes relies on subtle arguments. For this reason, it is desirable to produce efficient, machine-verifiable certificates that solutions have been computed correctly. Building on the cutting planes proof system, we develop a certification method for optimisation problems in which symmetry and dominance breaking is easily expressible. Our experimental evaluation demonstrates that we can efficiently verify fully general symmetry breaking in Boolean satisfiability (SAT) solving, thus providing, for the first time, a unified method to certify a range of advanced SAT techniques that also includes cardinality and parity (XOR) reasoning. In addition, we apply our method to maximum clique solving and constraint programming as a proof of concept that the approach applies to a wider range of combinatorial problems.
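
For readers unfamiliar with the cutting planes proof system, the sketch below implements its division rule for 0/1 pseudo-Boolean constraints (our illustration; the paper's certificates chain such steps together):

```python
import math

def divide(constraint, d):
    """Cutting-planes division rule for a normalized pseudo-Boolean
    constraint  sum(coeffs[x] * x) >= rhs  over 0/1 literals:
    divide every coefficient and the degree by d, rounding up.
    """
    coeffs, rhs = constraint
    return ({x: math.ceil(c / d) for x, c in coeffs.items()},
            math.ceil(rhs / d))

# Example: 2x + 2y >= 3 forces x = y = 1; dividing by 2 makes that
# syntactically explicit as x + y >= 2.
print(divide(({"x": 2, "y": 2}, 3), 2))   # ({'x': 1, 'y': 1}, 2)
```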


Qualitative Reasoning about 2D Cardinal Directions using Answer Set Programming

Journal of Artificial Intelligence Research

We introduce a formal framework (called NCDC-ASP) for representing and reasoning about cardinal directions between extended spatial objects on a plane, using Answer Set Programming (ASP). NCDC-ASP preserves the meaning of cardinal directional relations as in Cardinal Directional Calculus (CDC), and provides solutions to all consistency checking problems in CDC under various conditions (i.e., for a complete/incomplete set of basic/disjunctive CDC constraints over connected/disconnected spatial objects). In particular, NCDC-ASP models a discretized version of the consistency checking problem in ASP, over a finite grid (rather than a plane), where we provide new lower bounds on the grid size to guarantee that it correctly characterizes solutions for consistency checking in CDC. In addition, NCDC-ASP has two novelties important for applications. First, it introduces default CDC constraints to represent and reason about background or commonsense knowledge involving default qualitative directional relations (e.g., "the ice cream truck is by default to the north of the playground" or "the keyboard is normally placed in front of the monitor"). Second, it introduces inferred CDC constraints to allow inference of missing CDC relations and to provide them as explanations. We illustrate the uses and usefulness of NCDC-ASP with interesting real-world scenarios. We design and develop a variety of benchmark instances, and comprehensively evaluate NCDC-ASP from the perspective of computational efficiency.
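
To make the CDC relations concrete, here is a hedged bounding-box sketch (ours, and a simplification: CDC itself works on the exact regions) of how the reference object partitions the plane into the nine basic direction tiles:

```python
def cdc_tiles(ref, target):
    """Basic CDC relation of `target` w.r.t. `ref`.

    Boxes are (xmin, ymin, xmax, ymax). The reference box partitions the
    plane into 9 tiles (B, N, S, E, W, NE, NW, SE, SW); the relation is
    the set of tiles the target overlaps.
    """
    rx1, ry1, rx2, ry2 = ref
    tx1, ty1, tx2, ty2 = target
    cols = []                       # W / middle / E bands the target meets
    if tx1 < rx1: cols.append("W")
    if tx2 > rx2: cols.append("E")
    if tx2 > rx1 and tx1 < rx2: cols.append("")
    rows = []                       # N / middle / S bands the target meets
    if ty2 > ry2: rows.append("N")
    if ty1 < ry1: rows.append("S")
    if ty2 > ry1 and ty1 < ry2: rows.append("")
    return {(r + c) or "B" for r in rows for c in cols}

# The truck's box lies strictly north of the playground's box:
print(cdc_tiles(ref=(0, 0, 4, 4), target=(1, 6, 3, 8)))   # {'N'}
```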


Dynamic Controllability of Temporal Plans in Uncertain and Partially Observable Environments

Journal of Artificial Intelligence Research

The formalism of Simple Temporal Networks (STNs) provides methods for evaluating the feasibility of temporal plans. The basic formalism deals with the consistency of quantitative temporal requirements on scheduled events, and implicitly assumes a single agent has full control over the timing of events. The extension to Simple Temporal Networks with Uncertainty (STNUs) introduces uncertainty into the timing of some events. Two main approaches to the feasibility of STNUs ask (1) whether a single schedule works irrespective of the duration outcomes, called Strong Controllability, and (2) whether a strategy exists to schedule future events based on the outcomes of past events, called Dynamic Controllability. Case (1) essentially assumes the timing of uncertain events cannot be observed by the agent, while case (2) assumes full observability. The formalism of Partially Observable Simple Temporal Networks with Uncertainty (POSTNU) takes an intermediate stance between these two extremes: a known subset of the uncertain events can be observed when they occur. A sound and complete polynomial algorithm for determining the Dynamic Controllability of POSTNUs has not previously been known; we present one in this paper, answering an open problem posed in the literature. Our approach factors the problem into Strong Controllability micro-problems within an overall Dynamic Controllability macro-problem framework. It generalizes the notion of the labeled distance graph from STNUs, with the generalized labels expressed as max/min expressions involving the observables. The paper introduces sound generalized reduction rules that act on the generalized labels; these incorporate tightenings based on observability that preserve viable dynamic strategies. It is shown that if the generalized reduction rules reach quiescence without exposing an inconsistency, then the POSTNU is Dynamically Controllable (DC). The paper also presents algorithms that apply the reduction rules in an organized way and reach quiescence in a polynomial number of steps if the POSTNU is Dynamically Controllable. Remarkably, the generalized perspective leads to a simpler and more uniform framework that also applies to the STNU special case. It helps illuminate the previous methods inasmuch as the max/min label representation is semantically clearer than the ad hoc upper/lower case labels previously used.
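
As background for readers new to the formalism, the sketch below checks basic STN consistency via negative-cycle detection on the distance graph. This is ours and only the starting point: the paper's contribution is the generalized labels and reduction rules layered on top.

```python
def stn_consistent(n, edges):
    """Check consistency of a Simple Temporal Network.

    Each constraint  t_v - t_u <= w  becomes a weighted edge (u, v, w)
    of the distance graph; the STN is consistent iff this graph has no
    negative cycle, which Bellman-Ford detects in O(n * |edges|).
    """
    dist = [0.0] * n                 # virtual source at distance 0 to all
    for _ in range(n + 1):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            return True
    return False                     # still relaxing: negative cycle

# Events 0,1,2 with t1-t0 <= 10, t2-t1 <= 5, t0-t2 <= -20 (infeasible).
print(stn_consistent(3, [(0, 1, 10), (1, 2, 5), (2, 0, -20)]))   # False
```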


Complexity of Computing the Shapley Value in Partition Function Form Games

Journal of Artificial Intelligence Research

We study the complexity of computing the Shapley value in partition function form games. We focus on two representations based on marginal contribution nets (embedded MC-nets and weighted MC-nets) and five extensions of the Shapley value. Our results show that while weighted MC-nets are more concise than embedded MC-nets, they have slightly worse computational properties when it comes to computing the Shapley value: two out of five extensions can be computed in polynomial time for embedded MC-nets and only one for weighted MC-nets.
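
For reference, the classical Shapley value (no externalities) can be computed by averaging marginal contributions over all join orders, as in the sketch below; the paper's five extensions additionally let a coalition's worth depend on how the remaining players are partitioned.

```python
from itertools import permutations
from math import factorial

def shapley(players, v):
    """Shapley value by averaging marginal contributions over all join
    orders. `v` maps a frozenset coalition to its worth. (Characteristic-
    function sketch only, i.e., no partition function form externalities.)
    """
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_fact = factorial(len(players))
    return {p: x / n_fact for p, x in phi.items()}

# A 3-player majority game: a coalition wins (worth 1) iff it has >= 2 players.
v = lambda S: 1.0 if len(S) >= 2 else 0.0
print(shapley([1, 2, 3], v))   # each player gets 1/3
```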


Program Synthesis with Best-First Bottom-Up Search

Journal of Artificial Intelligence Research

Cost-guided bottom-up search (BUS) algorithms use a cost function to guide the search when solving program synthesis tasks. In this paper, we show that current state-of-the-art cost-guided BUS algorithms suffer from a common problem: they can lose useful information given by the model and fail to perform the search in a best-first order according to the cost function. We introduce a novel best-first bottom-up search algorithm, which we call Bee Search, that does not suffer this information loss and is able to perform cost-guided bottom-up synthesis in a best-first manner. Importantly, Bee Search performs best-first search with respect to the generation of programs, i.e., it does not even create in memory any program that is more expensive than the solution program. It attains this best-first ordering by searching in an abstract space of program costs. We also introduce a new cost function that makes better use of the information provided by an existing cost model. Empirical results on string manipulation and bit-vector tasks show that Bee Search can outperform existing cost-guided BUS approaches when employing more complex domain-specific languages (DSLs); Bee Search and previous approaches perform equally well with simpler DSLs. Furthermore, our new cost function with Bee Search outperforms previous cost functions on string manipulation tasks.
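
Below is a minimal sketch of cost-guided bottom-up search in a best-first order (ours, illustrating the priority-queue ordering idea rather than Bee Search's abstract cost space or its generation-time guarantee):

```python
import heapq

def best_first_bus(terminals, ops, cost, is_solution, max_cost=20):
    """Best-first bottom-up synthesis loop: a priority queue orders
    candidate programs by cost, so candidates are dequeued cheapest-first.
    """
    heap = [(cost(t), t) for t in terminals]
    heapq.heapify(heap)
    seen, bank = set(), []
    while heap:
        _, prog = heapq.heappop(heap)
        if prog in seen:
            continue
        seen.add(prog)
        if is_solution(prog):
            return prog
        bank.append(prog)              # everything in the bank is no costlier
        for op in ops:                 # combine with all banked subprograms
            for other in bank:
                for new in (op(prog, other), op(other, prog)):
                    if new not in seen and cost(new) <= max_cost:
                        heapq.heappush(heap, (cost(new), new))
    return None

# Tiny string DSL: two terminals and concatenation, with cost = length.
print(best_first_bus(["a", "b"], [lambda p, q: p + q], len,
                     lambda p: p == "abba"))   # abba
```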


Hierarchical Decompositions and Termination Analysis for Generalized Planning

Journal of Artificial Intelligence Research

This paper presents new methods for analyzing and evaluating generalized plans that can solve broad classes of related planning problems. Although synthesis and learning of generalized plans has been a longstanding goal in AI, it remains challenging due to fundamental gaps in methods for analyzing the scope and utility of a given generalized plan. This paper addresses these gaps by developing a new conceptual framework along with proof techniques and algorithmic processes for assessing termination and goal-reachability related properties of generalized plans. We build upon classic results from graph theory to decompose generalized plans into smaller components that are then used to derive hierarchical termination arguments. These methods can be used to determine the utility of a given generalized plan, as well as to guide the synthesis and learning processes for generalized plans. We present theoretical as well as empirical results illustrating the scope of this new approach. Our analysis shows that this approach significantly extends the class of generalized plans that can be assessed automatically, thereby reducing barriers in the synthesis and learning of reliable generalized plans.
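
The graph-theoretic core of the approach can be sketched as follows (using networkx on a toy control graph of our own invention, as shown below): decompose the plan's transition structure into strongly connected components, so that each looping component can be given its own termination argument while the acyclic condensation handles the rest.

```python
import networkx as nx

# A toy control graph of a generalized plan: nodes are program states,
# edges are possible transitions (names are ours, for illustration).
g = nx.DiGraph([("init", "loop_a"), ("loop_a", "loop_a"),
                ("loop_a", "loop_b"), ("loop_b", "loop_b"),
                ("loop_b", "goal")])

# Each nontrivial SCC is a loop that needs its own termination argument;
# the condensation is a DAG, so if every looping component terminates,
# the whole plan terminates.
sccs = list(nx.strongly_connected_components(g))
dag = nx.condensation(g, sccs)
loops = [c for c in sccs if len(c) > 1 or any(g.has_edge(v, v) for v in c)]
print("components needing a termination argument:", loops)
print("condensation is a DAG:", nx.is_directed_acyclic_graph(dag))  # True
```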