The Exact Closest String Problem as a Constraint Satisfaction Problem

arXiv.org Artificial Intelligence

We report (to our knowledge) the first evaluation of Constraint Satisfaction as a computational framework for solving closest string problems. We show that careful consideration of symbol occurrences can yield search heuristics that provide several orders of magnitude of speedup at and above the optimal distance. We also report (to our knowledge) the first analysis and evaluation -- using any technique -- of the computational difficulties involved in the identification of all closest strings for a given input set. We describe algorithms for web-scale distributed solution of closest string problems, based both purely on AI backtrack search and on hybrid numeric-AI methods.
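As a rough illustration of the formulation (not the paper's solver), the sketch below decides whether a string within Hamming distance d of every input exists, using a backtracking search whose value ordering tries the most frequent symbol in each column first -- a simplified stand-in for the symbol-occurrence heuristics described above.

```python
# Minimal backtracking sketch of the decision version of the closest string
# problem, assuming equal-length input strings. Returns a center string or None.
from collections import Counter

def closest_string(strings, d):
    n = len(strings[0])
    # Per-column symbol frequencies, used to order value choices in the search.
    columns = [Counter(s[i] for s in strings) for i in range(n)]
    dist = [0] * len(strings)            # mismatches accumulated so far per string

    def backtrack(pos, prefix):
        if pos == n:
            return prefix
        for sym, _ in columns[pos].most_common():
            bumped = [i for i, s in enumerate(strings) if s[pos] != sym]
            if all(dist[i] + 1 <= d for i in bumped):
                for i in bumped:
                    dist[i] += 1
                result = backtrack(pos + 1, prefix + sym)
                for i in bumped:
                    dist[i] -= 1
                if result is not None:
                    return result
        return None

    return backtrack(0, "")

# Example: find a string within Hamming distance 1 of all three inputs.
print(closest_string(["ACGT", "ACGA", "TCGT"], 1))    # -> "ACGT"
```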


Joint Structured Models for Extraction from Overlapping Sources

arXiv.org Artificial Intelligence

We consider the problem of jointly training structured models for extraction from sources whose instances overlap partially. This has important applications such as user-driven ad-hoc information extraction on the web. Such applications present new challenges in terms of the number of sources and their arbitrary patterns of overlap, which earlier collective training schemes applied to two sources do not address. We present an agreement-based learning framework and alternatives within it to trade off tractability, robustness to noise, and extent of agreement. We provide a principled scheme to discover low-noise agreement sets in unlabeled data across the sources. Through extensive experiments over 58 real datasets, we establish that our method of additively rewarding agreement over maximal segments of text provides the best trade-offs, and also outperforms alternatives such as collective inference, staged training, and multi-view learning.


Combined Task and Motion Planning for Mobile Manipulation

AAAI Conferences

We present a hierarchical planning system and its application to robotic manipulation. The novel features of the system are: 1) it finds high-quality kinematic solutions to task-level problems; 2) it takes advantage of subtask-specific irrelevance information, reusing optimal solutions to state-abstracted subproblems across the search space. We briefly describe how the system handles uncertainty during plan execution, and present results on discrete problems as well as pick-and-place tasks for a mobile robot.
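The reuse of optimal solutions to state-abstracted subproblems can be pictured as memoizing subtask plans keyed by only the relevant state variables; the sketch below is a minimal, hypothetical illustration of that idea, not the system's actual interfaces.

```python
# A minimal sketch of reusing solutions to state-abstracted subproblems:
# before solving a subtask, project the world state onto the variables relevant
# to that subtask and memoize the plan found for the resulting abstract state.
# The helper names (relevant_vars, solve_subtask) and the dictionary-based
# state representation are illustrative assumptions.
def make_subtask_planner(relevant_vars, solve_subtask):
    cache = {}
    def plan(subtask, state):
        abstract = tuple(sorted((v, state[v]) for v in relevant_vars[subtask]))
        if (subtask, abstract) not in cache:
            cache[(subtask, abstract)] = solve_subtask(subtask, dict(abstract))
        return cache[(subtask, abstract)]
    return plan

# Toy usage: a "pick_cup" subtask only cares about the gripper and the cup pose.
relevant = {"pick_cup": ("gripper", "cup_pose")}
solver = lambda task, s: [f"{task} from {s['cup_pose']}"]
plan = make_subtask_planner(relevant, solver)
print(plan("pick_cup", {"gripper": "empty", "cup_pose": "table", "time": 7}))
print(plan("pick_cup", {"gripper": "empty", "cup_pose": "table", "time": 99}))  # cache hit
```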


G-Value Plateaus: A Challenge for Planning

AAAI Conferences

While the string of successes found in using heuristic, best-first search methods has provided positive reinforcement for continuing work along these lines, fundamental problems arise when handling objectives whose value does not change with search operations. An extreme case of this occurs when handling the objective of generating a temporal plan with short makespan. Typically used heuristic search methods assume strictly positive edge costs for their guarantees on completeness and optimality, while the usual "fattening" and "advance time" steps of heuristic search for temporal planning can result in "g-value plateaus". In this paper we point out some underlying difficulties with using modern heuristic search methods when operating over g-value plateaus and discuss how the presence of these problems contributes to the poor performance of heuristic search planners. To further illustrate this, we show empirical results on recent benchmarks using a planner made with makespan optimization in mind.
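A minimal way to see a g-value plateau: in the toy uniform-cost search below, the cost of a step is the increase in makespan, so steps that fit in parallel cost zero and many frontier nodes tie on g, giving the search no ordering guidance. The two-machine scheduling domain is an illustrative assumption, not one of the paper's benchmarks.

```python
# Toy uniform-cost search where edge cost = increase in makespan. Placing a job
# on the less-loaded machine often costs 0, producing runs of nodes with equal
# g-values (a g-value plateau).
import heapq
from itertools import count

jobs = [3, 3, 3, 3]                      # equal-length jobs; pairs fit in parallel

def successors(state):
    placed, m1, m2 = state
    if placed == len(jobs):
        return
    d = jobs[placed]
    for m1_, m2_ in ((m1 + d, m2), (m1, m2 + d)):
        cost = max(m1_, m2_) - max(m1, m2)   # 0 whenever the job fits in parallel
        yield cost, (placed + 1, m1_, m2_)

start = (0, 0, 0)
tie = count()                            # tie-breaker so the heap never compares states
frontier = [(0, next(tie), start)]
expanded = 0
while frontier:
    g, _, state = heapq.heappop(frontier)
    expanded += 1
    if state[0] == len(jobs):
        print(f"makespan {g}, nodes expanded {expanded}")
        break
    for cost, nxt in successors(state):
        heapq.heappush(frontier, (g + cost, next(tie), nxt))
```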


Iterative Learning of Weighted Rule Sets for Greedy Search

AAAI Conferences

Greedy search is commonly used in an attempt to generate solutions quickly at the expense of completeness and optimality. In this work, we consider learning sets of weighted action-selection rules for guiding greedy search with application to automated planning. We make two primary contributions over prior work on learning for greedy search. First, we introduce weighted sets of action-selection rules as a new form of control knowledge for greedy search. Prior work has shown the utility of action-selection rules for greedy search, but has treated the rules as hard constraints, resulting in brittleness. Our weighted rule sets allow multiple rules to vote, helping to improve robustness to noisy rules. Second, we give a new iterative learning algorithm for learning weighted rule sets based on RankBoost, an efficient boosting algorithm for ranking. Each iteration considers the actual performance of the current rule set and directs learning based on the observed search errors. This is in contrast to most prior approaches, which learn control knowledge independently of the search process. Our empirical results have shown significant promise for this approach in a number of domains.
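The voting scheme can be sketched as follows: each rule nominates actions in the current state, weights are summed per action, and greedy search takes the highest-scoring applicable action. The toy domain and rule representation are illustrative assumptions; the RankBoost-based weight learning is not shown.

```python
# Greedy search guided by a weighted set of action-selection rules: each rule
# votes for actions it recommends, votes are combined by weight, and the
# highest-scoring applicable action is taken.
from collections import defaultdict

def greedy_search(state, goal_test, applicable, apply_action, weighted_rules,
                  max_steps=100):
    plan = []
    for _ in range(max_steps):
        if goal_test(state):
            return plan
        scores = defaultdict(float)
        for weight, rule in weighted_rules:
            for action in rule(state):            # actions this rule votes for
                scores[action] += weight
        actions = applicable(state)
        if not actions:
            return None
        best = max(actions, key=lambda a: scores.get(a, 0.0))
        plan.append(best)
        state = apply_action(state, best)
    return None

# Toy usage: walk from 0 to 5 on a line.
rules = [(1.0, lambda s: ["+1"] if s < 5 else ["-1"])]
print(greedy_search(0, lambda s: s == 5, lambda s: ["+1", "-1"],
                    lambda s, a: s + (1 if a == "+1" else -1), rules))
```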


Genome Rearrangement and Planning: Revisited

AAAI Conferences

Evolutionary trees of species can be reconstructed by pairwise comparison of their entire genomes. Such a comparison can be quantified by determining the number of events that change the order of genes in a genome. Earlier, Erdem and Tillier formulated the pairwise comparison of entire genomes as the problem of planning rearrangement events that transform one genome to the other. We reformulate this problem as a planning problem to extend its applicability to genomes with multiple copies of genes and with unequal gene content, and illustrate its applicability and effectiveness on three real datasets: mitochondrial genomes of Metazoa, chloroplast genomes of Campanulaceae, and chloroplast genomes of various land plants and green algae.
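The underlying combinatorial question can be illustrated with a brute-force search for the minimum number of reversal events between two small gene orders; this toy ignores transpositions, duplicate genes, and unequal gene content, which the planning formulation above is designed to handle.

```python
# Minimum number of reversal events transforming one gene order into another,
# found by breadth-first search over unsigned permutations (tiny genomes only).
from collections import deque

def reversal_distance(source, target):
    source, target = tuple(source), tuple(target)
    seen, frontier = {source}, deque([(source, 0)])
    while frontier:
        genome, d = frontier.popleft()
        if genome == target:
            return d
        n = len(genome)
        for i in range(n):
            for j in range(i + 2, n + 1):            # reverse the segment [i, j)
                nxt = genome[:i] + genome[i:j][::-1] + genome[j:]
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, d + 1))
    return None

print(reversal_distance([3, 1, 2, 4], [1, 2, 3, 4]))   # 2 reversals
```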


Perfect Hashing for State Space Exploration on the GPU

AAAI Conferences

This paper exploits the parallel computing power of graphics cards to accelerate state space search. We illustrate that modern graphics processing units (GPUs) have the potential to speed up breadth-first search significantly. For a bitvector representation of the search frontier, GPU algorithms with one and two bits per state are presented. Efficient perfect hash functions and their inverses are explored in order to achieve enhanced compression. We report speed-ups of up to a factor of 27 with respect to single-core CPU computation.
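The core ingredients -- a perfect hash with an efficient inverse plus a frontier stored as a flat bit/byte array -- can be sketched on the CPU as below; the pancake-flip state space and the Python bytearray standing in for GPU memory are illustrative assumptions, not the paper's implementation.

```python
# Level-by-level BFS over all permutations of n elements under prefix
# reversals, with states addressed through a perfect hash (Lehmer rank).
from math import factorial

def rank(perm):                         # perfect hash: permutation -> unique index
    n, r, items = len(perm), 0, sorted(perm)
    for i in range(n):
        smaller = items.index(perm[i])  # remaining elements smaller than perm[i]
        r += smaller * factorial(n - 1 - i)
        items.remove(perm[i])
    return r

def unrank(r, n):                       # inverse of the perfect hash
    items, perm = list(range(n)), []
    for i in range(n - 1, -1, -1):
        q, r = divmod(r, factorial(i))
        perm.append(items.pop(q))
    return tuple(perm)

def bfs_eccentricity(n):
    size = factorial(n)
    visited = bytearray(size)           # one byte per state here; 1-2 bits on a GPU
    frontier = [rank(tuple(range(n)))]
    visited[frontier[0]] = 1
    depth = 0
    while frontier:
        nxt = []
        for r in frontier:
            perm = unrank(r, n)
            for k in range(2, n + 1):   # all prefix reversals ("pancake flips")
                child = rank(perm[:k][::-1] + perm[k:])
                if not visited[child]:
                    visited[child] = 1
                    nxt.append(child)
        frontier, depth = nxt, depth + 1
    return depth - 1

print(bfs_eccentricity(5))              # worst-case number of flips to sort 5 pancakes
```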


Shopper: A System for Executing and Simulating Expressive Plans

AAAI Conferences

We present Shopper, a plan execution engine that facilitates experimental evaluation of plans and makes it easier for planning researchers to incorporate replanning. Shopper interprets the LTML plan language, which extends PDDL in two major ways: with more expressive control structures, and with support for semantic web services modeled on OWL-S. LTML's command structures include not only conventional ones such as branching, iteration, and procedure calls, but also features needed to handle HTN plans, such as precondition-filtered method choice. Unlike conventional programming languages, LTML supports interaction with the agent's belief store, so that its execution semantics line up with those assumed by planners. LTML actions extend PDDL actions in having outputs as well as effects, which means that they can support actions that sense the world; an important special case of this is semantic web services, which reveal information about a state hidden from the agent. To support experimentation as well as action in the real world, Shopper accommodates multiple, swappable implementations of its primitive action API. For example, one may interact with real web services through SOAP and WSDL, or with simulated web services through local procedure calls. We describe novel features of LTML, the interpretation strategy, swappable back-ends, and the implementation.
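The swappable back-end idea can be sketched as a narrow primitive-action interface with interchangeable implementations; the class and method names below are hypothetical, not Shopper's actual API, and only a locally simulated back-end is shown (a real back-end would wrap SOAP/WSDL calls behind the same interface).

```python
# Sketch of swappable primitive-action back-ends behind one interface.
from abc import ABC, abstractmethod

class PrimitiveActionBackend(ABC):
    @abstractmethod
    def invoke(self, action, **inputs):
        """Execute one primitive action and return its outputs."""

class SimulatedBackend(PrimitiveActionBackend):
    def __init__(self, tables):
        self.tables = tables                      # canned responses per action

    def invoke(self, action, **inputs):
        return self.tables[action](**inputs)      # local procedure call

def execute(plan, backend, beliefs):
    # Each step's outputs are asserted into the agent's belief store, so
    # sensing actions (e.g., web-service lookups) can inform later steps.
    for action, inputs in plan:
        beliefs.update(backend.invoke(action, **inputs))
    return beliefs

sim = SimulatedBackend({"lookup_price": lambda item: {"price": 42}})
print(execute([("lookup_price", {"item": "book"})], sim, {}))
```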


Using Backwards Generated Goals for Heuristic Planning

AAAI Conferences

Forward State Planning with Reachability Heuristics is arguably the most successful approach to Automated Planning to date. In addition to an estimation of the distance to the goal, relaxed plans obtained with such heuristics provide the search with useful information such as helpful actions and look-ahead states. However, this information is extracted only from the beginning of the relaxed plan. In this paper, we propose using information extracted from the last actions in the relaxed plan to generate intermediate goals backwards. This allows us to use information from previous computations of the heuristic and reduce the depth of the search tree.
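Generating an intermediate goal backwards from the last action of a relaxed plan amounts to regressing the goal set through that action; the minimal sketch below uses a generic STRIPS-style action as an illustrative assumption, not the paper's planner internals.

```python
# Regress a goal set through an action: the facts the action achieves are
# replaced by its preconditions (delete effects are ignored, as in the relaxation).
def regress(goal, action):
    preconds, adds = action
    return (goal - adds) | preconds

# Toy example: goal {delivered}; the last relaxed action is deliver with
# preconditions {at_B, holding_pkg} and add effect {delivered}.
goal = {"delivered"}
deliver = ({"at_B", "holding_pkg"}, {"delivered"})   # (preconditions, add effects)
print(regress(goal, deliver))          # intermediate goal: {'at_B', 'holding_pkg'}
```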


When Policies Can Be Trusted: Analyzing a Criteria to Identify Optimal Policies in MDPs with Unknown Model Parameters

AAAI Conferences

Computing a good policy in stochastic uncertain environments with unknown dynamics and reward model parameters is a challenging task. In a number of domains, ranging from space robotics to epilepsy management, it may be possible to have an initial training period when suboptimal performance is permitted. For such problems it is important to be able to identify when this training period is complete, and the computed policy can be used with high confidence in its future performance. A simple, principled criterion for identifying when training is complete is that the error bounds on the value estimates of the current policy are sufficiently small that the optimal policy is fixed, with high probability. We present an upper bound on the amount of training data required to identify the optimal policy as a function of the unknown separation gap between the optimal and the next-best policy values. We illustrate with several small problems that by estimating this gap in an online manner, the number of training samples to provably reach optimality can be significantly lower than predicted offline using a Probably Approximately Correct framework that requires an input epsilon parameter.
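One way to picture the stopping criterion: declare training complete once the estimated gap between the best and second-best policy values exceeds the combined error bounds on their estimates. The Hoeffding-style bound and the direct policy-value comparison below are illustrative assumptions, not the paper's exact analysis.

```python
# Check whether the current value estimates already fix the optimal policy
# with high probability, by comparing the estimated gap to the error bounds.
from math import log, sqrt

def error_bound(n, value_range=1.0, delta=0.05):
    # Hoeffding bound: with probability >= 1 - delta the estimate is within
    # this radius of the true value after n independent samples.
    return value_range * sqrt(log(2.0 / delta) / (2.0 * n))

def optimal_policy_identified(estimates, counts, delta=0.05):
    # estimates / counts: per-policy value estimates and sample counts.
    bounds = [error_bound(n, delta=delta) for n in counts]
    order = sorted(range(len(estimates)), key=lambda i: estimates[i], reverse=True)
    best, second = order[0], order[1]
    gap = estimates[best] - estimates[second]
    return gap > bounds[best] + bounds[second]

print(optimal_policy_identified([0.80, 0.55], [200, 200]))   # True: gap exceeds bounds
```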