Shen, Xiaoyu (Max Planck Institute for Informatics) | Su, Hui (Software Institute, University of Chinese Academy of Sciences) | Niu, Shuzi (Software Institute, University of Chinese Academy of Sciences) | Demberg, Vera (Saarland University)
Variational encoder-decoders (VEDs) have shown promising results in dialogue generation. However, the latent variable distributions are usually approximated by a much simpler model than the powerful RNN structure used for encoding and decoding, yielding the KL-vanishing problem and an inconsistent training objective. In this paper, we separate training into two phases: the first phase learns to autoencode discrete texts into continuous embeddings, from which the second phase learns to generalize latent representations by reconstructing the encoded embedding. In this case, latent variables are sampled by transforming Gaussian noise through multi-layer perceptrons and are trained with a separate VED model, which has the potential to realize a much more flexible distribution. We compare our model with current popular models, and the experiments demonstrate substantial improvement in both metric-based and human evaluations.
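As a concrete illustration of the sampling scheme the abstract describes (Gaussian noise transformed through multi-layer perceptrons into latent samples), here is a minimal NumPy sketch. The layer sizes, the tanh nonlinearity, and the weight initialization are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_transform(eps, weights, biases):
    """Push Gaussian noise through a small MLP to obtain latent samples.

    Hidden layers use tanh; the final layer is linear. All architectural
    choices here are our own, for illustration only."""
    h = eps
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.tanh(h @ W + b)
    return h @ weights[-1] + biases[-1]

# hypothetical dimensions: 16-d noise -> 32 hidden units -> 8-d latent
weights = [rng.standard_normal((16, 32)) * 0.1,
           rng.standard_normal((32, 8)) * 0.1]
biases = [np.zeros(32), np.zeros(8)]

eps = rng.standard_normal((5, 16))   # a batch of 5 Gaussian noise vectors
z = mlp_transform(eps, weights, biases)   # 5 latent samples of dimension 8
```

Because the noise-to-latent map is an arbitrary learned MLP rather than a fixed Gaussian reparameterization, the induced latent distribution can be far more flexible than a diagonal Gaussian.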
Several known heuristic functions can capture the input at different levels of precision, and support relaxation-refinement operations guaranteed to converge to exact information in a finite number of steps. A natural idea is to use such refinement online, during search, yet this has barely been addressed. We do so here for local search, where relaxation refinement is particularly appealing: escape local minima not by search, but by removing them from the search surface. Thanks to convergence, such an escape is always possible. We design a family of hill-climbing algorithms along these lines. We show that these are complete, even when using helpful-actions pruning. Using them with the partial delete relaxation heuristic h^CFF, the best-performing variant outclasses FF's enforced hill-climbing, outperforms FF, outperforms dual-queue greedy best-first search with h^FF, and in 6 IPC domains outperforms both LAMA and Mercury.
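The escape-by-refinement idea can be sketched on a toy example: when hill-climbing reaches a state where no successor improves the heuristic, the heuristic is refined until the local minimum disappears, rather than searching around it. The graph, the heuristic levels, and `make_heuristic` below are invented stand-ins for h^CFF-style relaxation refinement, not the paper's algorithm.

```python
# Hypothetical 4-state space: state -> successors, goal G.
GRAPH = {'A': ['B', 'C'], 'B': ['A'], 'C': ['G'], 'G': []}
GOAL = 'G'
EXACT = {'A': 2, 'B': 3, 'C': 1, 'G': 0}   # true goal distances

def make_heuristic(level):
    """Level 0 has a local minimum at B; refinement (level >= 1) is exact."""
    if level == 0:
        return lambda s: {'A': 2, 'B': 1, 'C': 3, 'G': 0}[s]
    return lambda s: EXACT[s]

def hill_climb(start):
    level, h = 0, make_heuristic(0)
    state, path = start, [start]
    while state != GOAL:
        best = min(GRAPH[state], key=h)
        if h(best) >= h(state):      # local minimum or plateau reached:
            level += 1               # refine the heuristic instead of
            h = make_heuristic(level)  # searching around the minimum
            continue
        state = best
        path.append(state)
    return path
```

On this toy instance, `hill_climb('A')` first descends into the false minimum at B, refines, and then follows the exact heuristic straight to the goal. Convergence of the refinement sequence is what makes such an escape always possible.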
Speicher, Patrick (Saarland University) | Steinmetz, Marcel (Saarland University) | Gnad, Daniel (Saarland University) | Hoffmann, Jörg (Saarland University) | Gerevini, Alfonso (University of Brescia)
Red-black planning relaxes a subset of the state variables (the red ones) while keeping the regular semantics for the others (the black ones). This is coarse-grained in that, for each variable, it either remembers all past values (red) or remembers only the most recent one (black). We herein introduce limited-memory state variables, which remember a subset of their most recent values. It turns out that planning is still PSPACE-complete even when the memory is large enough to store all but a single value. Nevertheless, limited memory can be used to substantially broaden a known tractable fragment of red-black planning, yielding better heuristic functions in some domains.
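The semantics of a limited-memory variable can be sketched with a bounded buffer: with memory size k = 1 the variable behaves like a black (regular) variable, and with unbounded k like a red (value-accumulating) one. The class and its method names below are our own illustration, not the paper's formalism.

```python
from collections import deque

class LimitedMemoryVariable:
    """A state variable that remembers only its k most recent values."""

    def __init__(self, k, initial):
        self.values = deque([initial], maxlen=k)

    def assign(self, value):
        self.values.append(value)    # oldest value is forgotten when full

    def holds(self, value):
        return value in self.values  # a condition holds if any remembered
                                     # value matches (relaxed semantics)

v = LimitedMemoryVariable(k=2, initial='a')
v.assign('b')
v.assign('c')   # memory is full, so 'a' is now forgotten
```

Here `v.holds('b')` and `v.holds('c')` are true while `v.holds('a')` is false: the variable accumulates values like a red variable, but only over a bounded window.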
One traditional use of critical-path heuristic functions is as effective sufficient criteria for unsolvability. To employ this for dead-end detection, the heuristic function must be evaluated on every new state to be tested, incurring a substantial runtime overhead. We show herein that the exact same dead-end detector can be captured through a nogood, a formula φ^off computed once prior to search. This is mostly of theoretical interest, as φ^off is large. We obtain practical variants by instead incrementally generating a stronger nogood ψ, which implies φ^off, online during search, generalizing from already-tested states to avoid future heuristic-function evaluations.
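The runtime saving can be illustrated with a minimal sketch: a learned nogood is a set of facts such that any state containing all of them is a known dead end, so matching states skip the expensive heuristic evaluation. The fact names and `expensive_deadend_check` are hypothetical stand-ins for the critical-path machinery.

```python
def expensive_deadend_check(state):
    # placeholder for evaluating the critical-path heuristic on `state`;
    # this toy version declares states with no fuel and no depot dead
    return 'fuel=0' in state and 'at-depot' not in state

nogoods = []   # grown incrementally, online during search

def is_deadend(state):
    if any(ng <= state for ng in nogoods):   # nogood matches as a subset:
        return True                          # no heuristic evaluation needed
    if expensive_deadend_check(state):
        # hypothetical learned reason, generalizing beyond this one state
        nogoods.append(frozenset({'fuel=0'}))
        return True
    return False

s1 = frozenset({'fuel=0', 'at-roadside'})
s2 = frozenset({'fuel=0', 'at-bridge'})
first = is_deadend(s1)    # pays for the expensive check, learns a nogood
second = is_deadend(s2)   # caught by the learned nogood, check skipped
```

The second query never touches the expensive detector: the nogood learned from `s1` generalizes to `s2`, which is the source of the practical speedup.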
Symmetry breaking is a well-known method for search reduction. It identifies state-space symmetries prior to search, and prunes symmetric states during search. A recent proposal, star-topology decoupled search, is to search not in the state space, but in a factored version thereof, which avoids the multiplication of states across leaf components in an underlying star-topology structure. We show that, despite the much more complex structure of search states -- so-called decoupled states -- symmetry breaking can be brought to bear in this framework as well. Starting from the notion of structural symmetries over states, we identify a sub-class of such symmetries suitable for star-topology decoupled search, and we show how symmetries from that sub-class induce symmetry relations over decoupled states. We accordingly extend the routines required for search pruning and solution reconstruction. The resulting combined method can be exponentially better than both its components in theory, and this synergetic advantage is also manifested in practice: empirically, our method reliably inherits the best of its base components, and often outperforms them both.
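The basic pruning mechanism underlying symmetry breaking can be sketched by mapping each state to a canonical representative of its symmetry class and expanding only one state per class. Sorting as the canonical form, and the two-package example, are illustrative simplifications; the structural symmetries in the paper are general permutations of the task, and the contribution is lifting this to decoupled states.

```python
def canonical(state):
    # canonical representative of a symmetry class: here, states list the
    # locations of interchangeable packages, so sorting identifies states
    # that differ only by permuting the packages
    return tuple(sorted(state))

seen = set()

def prune(state):
    c = canonical(state)
    if c in seen:
        return True       # a symmetric state was already expanded
    seen.add(c)
    return False

a = ('A', 'B')   # pkg1 at A, pkg2 at B
b = ('B', 'A')   # pkg1 at B, pkg2 at A: symmetric to `a` if the
                 # packages are interchangeable
pa = prune(a)    # first representative of the class, gets expanded
pb = prune(b)    # pruned as symmetric
```

Only one of the two symmetric states is expanded; in decoupled search the same idea must additionally account for the leaf-component structure of each decoupled state.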
Star-Topology Decoupled Search has recently been introduced in classical planning. It splits the planning task into a set of components whose dependencies take a star structure, where one center component interacts with possibly many leaf components. Here we address a weakness of decoupled search, namely large leaf components, whose state space is enumerated explicitly. We propose a symbolic representation of the leaf state spaces via decision diagrams, which can be dramatically smaller, and also more runtime efficient. We further introduce a symbolic version of the LM-cut heuristic, which nicely connects to our new leaf representation. We show empirically that the symbolic representation indeed pays off when the leaf components are large.
Heuristic search is one of the most successful approaches to classical planning, finding solution paths in large state spaces. A major focus has been the development of domain-independent heuristic functions. One recent method is partial delete relaxation heuristics, which improve over the standard delete relaxation heuristic by imposing a set C of conjunctions to be treated as atomic. Practical methods for selecting C are based on counter-example-guided abstraction refinement, where iteratively a relaxed plan is checked for conflicts and new atomic conjunctions are introduced to address these. However, in each refinement step, the choice of possible new conjunctions is huge. The literature so far offers merely one simple strategy for making that choice. Here we fill that gap, considering a sizable space of basic ranking strategies as well as combinations thereof. We furthermore devise ranking strategies for conjunction forgetting, where the ranking pertains to the current conjunctions and thus statistics over their usefulness can be maintained. Our experiments show that ranking strategies do make a large difference in performance, and that our new strategies can be useful.
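A combined ranking strategy can be sketched as a lexicographic key over per-candidate statistics: a primary criterion orders the candidates, and secondary criteria break ties. The candidate conjunctions and both scoring criteria below are invented for illustration; the paper's strategies rank by properties of the conflicts found in the relaxed plan.

```python
# hypothetical candidate conjunctions from one refinement step, each with
# statistics gathered during conflict analysis
candidates = [
    {'conj': ('at-B', 'fuel>0'),            'conflicts': 3, 'size': 2},
    {'conj': ('at-B', 'holding', 'fuel>0'), 'conflicts': 3, 'size': 3},
    {'conj': ('door-open',),                'conflicts': 1, 'size': 1},
]

def rank(c):
    # combined strategy: primarily prefer conjunctions addressing more
    # conflicts; as a tie-breaker, prefer smaller (cheaper) conjunctions
    return (-c['conflicts'], c['size'])

best = min(candidates, key=rank)
```

Here the two basic strategies disagree (the smallest conjunction addresses the fewest conflicts), and the lexicographic combination resolves the choice: the two-atom conjunction addressing three conflicts wins.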
We introduce a state space search method that identifies dead-end states, analyzes the reasons for failure, and learns to avoid similar mistakes in the future. Our work is placed in classical planning. The key technique is critical-path heuristics h^C, relative to a set C of conjunctions. These recognize a dead-end state s, returning h^C(s) = ∞, if s has no solution even when allowed to break up conjunctive subgoals into the elements of C. Our key idea is to learn C during search. Starting from a simple initial C, we augment search to identify unrecognized dead-ends s, where h^C(s) < ∞. We design methods analyzing the situation at such s, adding new conjunctions into C to obtain h^C(s) = ∞, thus learning to recognize s as well as similar dead-ends search may encounter in the future. We furthermore learn clauses φ where s' not satisfying φ implies h^C(s') = ∞, to avoid the prohibitive overhead of computing h^C on every search state. Arranging these techniques in a depth-first search, we obtain an algorithm approaching the elegance of clause learning in SAT, learning to refute search subtrees. Our experiments show that this can be quite powerful. On problems where dead-ends abound, the learning reliably reduces the search space by several orders of magnitude.
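The core refinement loop can be rendered as a toy sketch: when search proves a state unsolvable but the current h^C does not recognize it, conjunctions are added to C until it does. The states, the pretend reachability table inside `hC`, and the conjunction chosen by `refine` are all hypothetical; the real refinement analyzes the critical-path structure at the dead end.

```python
INF = float('inf')

def hC(state, C):
    # pretend critical-path heuristic: reports a dead end exactly if some
    # conjunction in C is unreachable from `state` (toy lookup table)
    unreachable = {'s_dead': {('x', 'y')}}
    if any(c in unreachable.get(state, set()) for c in C):
        return INF
    return 1   # otherwise a finite estimate

def refine(state, C):
    # learn a new conjunction so that h^C recognizes `state` as a dead end
    # (the choice ('x', 'y') is a hypothetical stand-in for the analysis)
    return set(C) | {('x', 'y')}

C = {('x',)}                     # simple initial conjunction set
if hC('s_dead', C) < INF:        # search found an unrecognized dead end
    C = refine('s_dead', C)      # now h^C('s_dead', C) = INF
```

After refinement the dead end is recognized, and, crucially, so is every other state from which the new conjunction is unreachable, which is how the learning generalizes to future states.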
Despite its success, the delete relaxation has significant pitfalls. Recent work has devised the red-black planning framework, where red variables take the relaxed semantics (accumulating their values), while black variables take the regular semantics. Provided the red variables are chosen so that red-black plan generation is tractable, one can generate such a plan for every search state, and take its length as the heuristic distance estimate. Previous results were not suitable for this purpose because they identified tractable fragments for red-black plan existence, as opposed to red-black plan generation. We identify a new fragment of red-black planning that fixes this issue. We devise machinery to efficiently generate red-black plans, and to automatically select the red variables. Experiments show that the resulting heuristics can significantly improve over standard delete relaxation heuristics.