"The Crossword puzzle (CP) is a simple problem to illustrate the formalization process of a problem into a CSP. The problem is to place words of a dictionary in a given structure satisfying certain constraints. The variables are the rows and columns in the crossword, and their values are the words in a dictionary."
– Marc Torrens. An Application using the JCL: The Air Travel Planning System. Diploma Thesis, 1997, Chapter 1, Section 1.2.1.
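To make the quoted formalization concrete, here is a minimal Python sketch of the crossword-as-CSP encoding (illustrative only; the slot layout and dictionary are invented): slots are the variables, words of matching length are their values, and each crossing forces the shared letter to agree.

```python
# Minimal crossword-as-CSP sketch: slots are variables, dictionary words are
# values, and crossings constrain the shared letter. Illustrative data only.

SLOTS = {"1-across": 3, "1-down": 3}                # slot name -> length
CROSSINGS = [("1-across", 0, "1-down", 0)]          # shared first letter
DICTIONARY = ["cat", "car", "con", "ant", "toe"]

def consistent(assignment):
    """Check every crossing whose two slots are both assigned."""
    for a, i, b, j in CROSSINGS:
        if a in assignment and b in assignment:
            if assignment[a][i] != assignment[b][j]:
                return False
    words = list(assignment.values())
    return len(words) == len(set(words))            # no word used twice

def solve(assignment=None):
    """Plain backtracking search over the slot variables."""
    assignment = assignment or {}
    unassigned = [s for s in SLOTS if s not in assignment]
    if not unassigned:
        return assignment
    slot = unassigned[0]
    for word in DICTIONARY:
        if len(word) == SLOTS[slot]:
            assignment[slot] = word
            if consistent(assignment) and (result := solve(assignment)):
                return result
            del assignment[slot]
    return None

print(solve())  # e.g. {'1-across': 'cat', '1-down': 'car'}
```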
Procedural content generation via machine learning (PCGML) is typically framed as the task of fitting a generative model to full-scale examples of a desired content distribution. This approach presents a fundamental tension: the more design effort expended to produce detailed training examples for shaping a generator, the lower the return on investment from applying PCGML in the first place. In response, we propose the use of discriminative models (which capture the validity of a design rather than the distribution of the content) trained on positive and negative examples. Through a modest modification of WaveFunctionCollapse, a commercially-adopted PCG approach that we characterize as using elementary machine learning, we demonstrate a new mode of control for learning-based generators. We show how an artist might craft a focused set of additional positive and negative examples by critiquing the generator's previous outputs. This interaction mode bridges PCGML with mixed-initiative design assistance tools by working with a machine to define a space of valid designs rather than just one new design.
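As a rough illustration of the discriminative idea (a sketch with invented tile names, not the paper's implementation): WaveFunctionCollapse can be read as learning a set of legal tile adjacencies from a positive example, and a negative example can then veto adjacencies the artist dislikes, leaving a discriminative validity check for candidate outputs.

```python
# Sketch of adjacency learning in the spirit of WaveFunctionCollapse:
# legal tile pairs are those observed in positive examples and not
# observed in negative examples. Illustrative tiles: "g"rass, "w"ater.

def adjacencies(grid):
    """Collect (tile, neighbor, direction) triples from a 2D grid."""
    pairs = set()
    for r, row in enumerate(grid):
        for c, tile in enumerate(row):
            if c + 1 < len(row):
                pairs.add((tile, row[c + 1], "right"))
            if r + 1 < len(grid):
                pairs.add((tile, grid[r + 1][c], "down"))
    return pairs

positive = [["g", "g", "w"],
            ["g", "g", "w"]]    # a sample level the artist likes
negative = [["w", "g", "w"]]    # a one-tile strip the artist dislikes

legal = adjacencies(positive) - adjacencies(negative)

def valid(grid):
    """Discriminative check: every adjacency in the candidate is legal."""
    return adjacencies(grid) <= legal

print(valid([["g", "g"], ["g", "w"]]))  # False: ("g", "w", "right") vetoed
```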
The performance of enumerating all solutions to an instance of Langford's Problem is sensitive to the model and the search strategy. In this paper we compare the performance of a large variety of models, all derived from two base viewpoints. We empirically show that a channelled model with a static branching order on one of the viewpoints offers the best performance out of all the options we consider. Surprisingly, one of the base models proves very effective for propagation, while the other provides an effective means of stating a static search order.
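For readers unfamiliar with the problem: in L(2, n), each number k from 1 to n appears twice in a sequence of length 2n, with the two copies of k exactly k positions apart (indices differing by k + 1). A minimal Python sketch of one natural viewpoint, a variable per number holding the index of its first occurrence (an illustration, not one of the paper's models):

```python
# Langford's problem L(2, n): place two copies of each k in 1..n so that
# the copies of k are exactly k apart (indices differ by k + 1).
# Viewpoint: one variable per number k = the index of its first occurrence.

def langford(n, seq=None, k=None):
    if seq is None:
        seq, k = [0] * (2 * n), n     # place larger numbers first (tighter)
    if k == 0:
        return seq
    for i in range(2 * n - k - 1):    # candidate first positions for k
        j = i + k + 1                 # forced second position
        if seq[i] == 0 and seq[j] == 0:
            seq[i] = seq[j] = k
            if (result := langford(n, seq, k - 1)):
                return result
            seq[i] = seq[j] = 0       # backtrack
    return None

# Solutions exist only for n congruent to 0 or 3 (mod 4).
print(langford(4))  # [4, 1, 3, 1, 2, 4, 3, 2]
```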
Comprehending procedural text, e.g., a paragraph describing photosynthesis, requires modeling actions and the state changes they produce, so that questions about entities at different timepoints can be answered. Although several recent systems have shown impressive progress in this task, their predictions can be globally inconsistent or highly improbable. In this paper, we show how the predicted effects of actions in the context of a paragraph can be improved in two ways: (1) by incorporating global, commonsense constraints (e.g., a non-existent entity cannot be destroyed), and (2) by biasing reading with preferences from large-scale corpora (e.g., trees rarely move). Unlike earlier methods, we treat the problem as a neural structured prediction task, allowing hard and soft constraints to steer the model away from unlikely predictions. We show that the new model significantly outperforms earlier systems on a benchmark dataset for procedural text comprehension (+8% relative gain), and that it also avoids some of the nonsensical predictions that earlier systems make.
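A toy sketch of the decoding idea (the paper's scores come from a neural model; the table of log-probabilities below is invented): during beam search, hard constraints prune inconsistent actions outright, while soft corpus preferences rescore the survivors.

```python
# Constrained decoding sketch: choose one action per step for an entity,
# with a hard constraint (no DESTROY unless the entity exists) and a soft
# prior (penalise unlikely actions, e.g. "trees rarely move").

ACTIONS = ["NONE", "CREATE", "MOVE", "DESTROY"]
MODEL_SCORES = [  # hypothetical per-step log-probabilities
    {"NONE": -1.0, "CREATE": -0.2, "MOVE": -2.0, "DESTROY": -1.5},
    {"NONE": -0.8, "CREATE": -2.0, "MOVE": -0.5, "DESTROY": -1.2},
]
SOFT_PRIOR = {"MOVE": -0.7}  # corpus-derived log penalty

def exists_after(exists, action):
    """Track entity existence through an action."""
    return {"CREATE": True, "DESTROY": False}.get(action, exists)

def decode(beam_width=2):
    beam = [(0.0, [], False)]  # (score, actions so far, entity exists?)
    for step_scores in MODEL_SCORES:
        candidates = []
        for score, actions, exists in beam:
            for action in ACTIONS:
                if action == "DESTROY" and not exists:
                    continue  # hard constraint: can't destroy a non-entity
                if action == "CREATE" and exists:
                    continue  # hard constraint: can't create twice
                s = score + step_scores[action] + SOFT_PRIOR.get(action, 0.0)
                candidates.append(
                    (s, actions + [action], exists_after(exists, action)))
        beam = sorted(candidates, reverse=True)[:beam_width]
    return beam[0][1]

print(decode())  # ['CREATE', 'NONE'] under these toy scores
```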
Detecting small sets of relevant patterns from a given dataset is a central challenge in data mining. The relevance of a pattern is based on user-provided criteria; typically, all patterns that satisfy certain criteria are considered relevant. Rule-based languages like Answer Set Programming (ASP) seem well-suited for specifying such criteria in the form of constraints. Although progress has been made on solving individual mining problems on the one hand, and on developing generic mining systems on the other, existing methods focus either on scalability or on generality. In this paper we take steps towards combining local (frequency, size, cost) and global (various condensed representations like maximal, closed, skyline) constraints in a generic and efficient way. We present a hybrid approach for itemset, sequence and graph mining which exploits dedicated, highly optimized mining systems to detect frequent patterns and then filters the results using declarative ASP. To further demonstrate the generic nature of our hybrid framework, we apply it to the problem of approximately tiling a database. Experiments on real-world datasets show the effectiveness of the proposed method and computational gains for itemset, sequence and graph mining, as well as approximate tiling. Under consideration in Theory and Practice of Logic Programming (TPLP).
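A toy Python rendition of the mine-then-filter pipeline (the paper couples dedicated miners with declarative ASP; here a brute-force miner and plain comprehensions stand in for both): frequent patterns are mined first, then the global closedness and maximality conditions are applied as declarative post-filters.

```python
# Hybrid mine-then-filter sketch: a naive frequent itemset miner plays the
# optimized mining system; closed/maximal conditions play the ASP filter.

from itertools import combinations

DB = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
MIN_SUP = 3

def support(itemset):
    return sum(itemset <= t for t in DB)

items = sorted(set().union(*DB))
patterns = [frozenset(c)
            for size in range(1, len(items) + 1)
            for c in combinations(items, size)
            if support(frozenset(c)) >= MIN_SUP]

# Global constraints as declarative post-filters over the mined patterns:
closed = [p for p in patterns
          if not any(p < q and support(q) == support(p) for q in patterns)]
maximal = [p for p in patterns if not any(p < q for q in patterns)]

print(sorted(map(sorted, closed)))   # all 6 frequent patterns are closed here
print(sorted(map(sorted, maximal)))  # the three 2-itemsets are maximal
```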
Answer Set Programming (ASP) is a powerful modeling formalism for combinatorial problems. However, writing ASP models is not trivial. We propose a novel method, called Sketched Answer Set Programming (SkASP), aimed at supporting the user in resolving this issue. The user writes an ASP program while leaving uncertain parts open, marked with question marks. In addition, the user provides a number of positive and negative examples of the desired program behaviour. The sketched model is rewritten into another ASP program, which is solved by traditional methods. As a result, the user obtains a functional and reusable ASP program modelling her problem. We evaluate our approach on 21 well-known puzzles and combinatorial problems inspired by Karp's 21 NP-complete problems, and demonstrate a use case for a database application based on ASP.
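The flavour of sketching can be imitated in a few lines of Python (a toy analogue only; SkASP itself rewrites the sketch into ASP and hands it to a solver): treat each question mark as a hole with finitely many candidate completions, and keep the completions under which every positive example is accepted and every negative one rejected.

```python
# Toy sketching analogue: a rule with a hole (the comparison operator) is
# completed by searching for a candidate consistent with the examples.

import operator

# Sketched rule: "select x if x ? THRESHOLD", with the operator unknown.
HOLE_CANDIDATES = {"<": operator.lt, "<=": operator.le,
                   ">": operator.gt, ">=": operator.ge}
THRESHOLD = 10
POSITIVE = [12, 15]  # values the completed rule must select
NEGATIVE = [7, 10]   # values it must reject

def consistent(op):
    return (all(op(x, THRESHOLD) for x in POSITIVE)
            and not any(op(x, THRESHOLD) for x in NEGATIVE))

print([name for name, op in HOLE_CANDIDATES.items() if consistent(op)])
# ['>'] : the only completion that matches all examples
```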
In this post, I'm going to build an example of artificial intelligence in the form of a Constraint Satisfaction Problem (or CSP), showing how much mathematics, logic skills, and computer science knowledge can help in the process. For this purpose, I took a puzzle game called Hitori from the popular logic puzzle website Nikoli. I didn't choose Hitori because it was convenient; I picked a game at random, precisely because what I want to show doesn't depend on which game it is. Let's begin by learning what a CSP actually is. CSPs are mathematical problems defined as a set of objects whose state must satisfy a number of constraints or limitations.
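To preview where this is going, here is a compact sketch of Hitori as a CSP in Python (the grid is made up, and brute-force enumeration stands in for real constraint propagation): one Boolean variable per cell, and the three rule families become constraints.

```python
# Hitori as a CSP: shade cells so that (1) no number repeats among the
# unshaded cells of a row/column, (2) shaded cells are never orthogonally
# adjacent, and (3) the unshaded cells form one connected region.

from itertools import product

GRID = [[2, 2, 1],
        [1, 3, 2],
        [3, 1, 3]]
N = len(GRID)

def ok(shaded):
    for i in range(N):  # rule 1: no duplicates among unshaded cells
        row = [GRID[i][j] for j in range(N) if (i, j) not in shaded]
        col = [GRID[j][i] for j in range(N) if (j, i) not in shaded]
        if len(row) != len(set(row)) or len(col) != len(set(col)):
            return False
    # rule 2: no two shaded cells orthogonally adjacent
    if any((i + 1, j) in shaded or (i, j + 1) in shaded for i, j in shaded):
        return False
    # rule 3: unshaded cells form a single connected region
    white = {(i, j) for i in range(N) for j in range(N)} - shaded
    frontier, seen = [next(iter(white))], set()
    while frontier:
        i, j = frontier.pop()
        if (i, j) in white and (i, j) not in seen:
            seen.add((i, j))
            frontier += [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return seen == white

cells = [(i, j) for i in range(N) for j in range(N)]
for bits in product([False, True], repeat=N * N):
    shaded = {c for c, b in zip(cells, bits) if b}
    if ok(shaded):
        print(sorted(shaded))  # [(0, 1), (2, 2)] for this grid
        break
```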
Every Constraint Programming (CP) solver exposes a library of constraints for solving combinatorial problems. To be useful, CP solvers need to be bug-free, so testing the solver is crucial to giving developers and users confidence. We present a Java library that allows any JVM-based solver to test that its individual constraint implementations are correct. The library can be used in a test suite executed by a continuous-integration tool, or to discover minimal instances that violate some property (arc consistency, etc.) and thereby help the developer identify the origin of a problem using standard debuggers.
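The principle behind such testing can be sketched briefly (the library itself is Java; this Python sketch only illustrates the idea): compare the domains filtered by the propagator under test against a brute-force arc-consistency oracle on many random instances, and report any instance where they disagree.

```python
# Property test sketch: a value is arc-consistent iff it appears in some
# solution. We test a deliberately weak alldifferent filter (forward
# checking on fixed variables only) against a brute-force oracle, so a
# small counterexample instance should be found quickly.

import itertools, random

def brute_force_filter(domains):
    """Ground truth: arc-consistent domains via full enumeration."""
    solutions = [t for t in itertools.product(*domains)
                 if len(set(t)) == len(t)]
    return [{t[i] for t in solutions} for i in range(len(domains))]

def candidate_filter(domains):
    """Propagator under test (intentionally incomplete)."""
    domains = [set(d) for d in domains]
    for i, d in enumerate(domains):
        if len(d) == 1:
            for j, e in enumerate(domains):
                if j != i:
                    e.discard(next(iter(d)))
    return domains

random.seed(0)
for trial in range(1000):
    domains = [set(random.sample(range(4), random.randint(1, 3)))
               for _ in range(3)]
    if candidate_filter(domains) != brute_force_filter(domains):
        print("counterexample:", domains)  # hand this to the debugger
        break
```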
This is one of only a handful of texts that combines three fundamental theses in the study of logic programming: the logic that gives logic programs their special character; the practice of programming effectively using that logic; and the efficient implementation of logic programming on computers.
A number of problems in relational Artificial Intelligence can be viewed as Stochastic Constraint Optimization Problems (SCOPs). These are constraint optimization problems that involve objectives or constraints with a stochastic component. Building on the recently proposed language SC-ProbLog for modeling SCOPs, we propose a new method for solving these problems. Earlier methods used Probabilistic Logic Programming (PLP) techniques to create Ordered Binary Decision Diagrams (OBDDs), which were decomposed into smaller constraints in order to exploit existing constraint programming (CP) solvers. We argue that this approach has the drawback that a decomposed representation of an OBDD does not guarantee domain consistency during search, and hence limits the efficiency of the solver. For the specific case of monotonic distributions, we suggest an alternative method for using CP in SCOP, based on the development of a new propagator; we show that this propagator is linear in the size of the OBDD, and has the potential to be more efficient than the decomposition method, as it maintains domain consistency.
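To illustrate what such a propagator does (a sketch on a toy quasi-reduced OBDD where every path tests every variable; not the paper's implementation): two linear passes mark the nodes reachable from the root and the nodes that can reach the true terminal, and a value stays in a domain only if it labels an edge lying on a surviving root-to-true path.

```python
# Domain-consistent propagation over an OBDD, linear in its size.
# Toy quasi-reduced OBDD over Booleans x0, x1 encoding (x0 OR x1).
# Node: (variable index, {value: child}); terminals are True / False.

OBDD = {
    "n0": (0, {0: "n1", 1: "n2"}),
    "n1": (1, {0: False, 1: True}),
    "n2": (1, {0: True, 1: True}),
}
ROOT = "n0"

def propagate(domains):
    # Downward pass: nodes reachable from the root under current domains.
    down, stack = {ROOT}, [ROOT]
    while stack:
        var, edges = OBDD[stack.pop()]
        for v, child in edges.items():
            if v in domains[var] and child in OBDD and child not in down:
                down.add(child)
                stack.append(child)
    # Upward pass: nodes that can still reach the true terminal.
    # (Children appear after their parents in OBDD, so reverse order works.)
    up = {}
    for name in reversed(list(OBDD)):
        var, edges = OBDD[name]
        up[name] = any(v in domains[var] and (child is True or up.get(child))
                       for v, child in edges.items())
    # A value survives iff it labels an edge on a root-to-true path.
    filtered = [set() for _ in domains]
    for name in down:
        var, edges = OBDD[name]
        for v, child in edges.items():
            if v in domains[var] and (
                    child is True or (child in down and up.get(child))):
                filtered[var].add(v)
    return filtered

print(propagate([{0, 1}, {0, 1}]))  # [{0, 1}, {0, 1}]: all supported
print(propagate([{0}, {0, 1}]))     # [{0}, {1}]: x0 = 0 forces x1 = 1
```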
We examine the phase transition phenomenon for the Knapsack problem from both a computational and a human perspective. We first provide, via an empirical and a theoretical analysis, a characterization of the phenomenon in terms of two instance properties: normalised capacity and normalised profit. Then, we show evidence that the average time spent by human decision makers in solving an instance peaks near the phase transition. Given the ubiquity of the Knapsack problem in everyday life, a better understanding of its structure can improve our understanding not only of computational techniques but also of human behavior, including the use and development of heuristics and the occurrence of biases.
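A sketch of the kind of experiment involved (instance sizes and trial counts are invented): fix the normalised profit, sweep the normalised capacity, and measure how often random decision-version instances are satisfiable; the solvable fraction flips from near 0 to near 1 across the transition, which is where instances tend to be hardest.

```python
# Phase-transition sketch for decision Knapsack: is profit >= P achievable
# within capacity C?  Normalised capacity c = C / sum(weights); normalised
# profit p = P / sum(profits) is held fixed at 0.5.

import random

def solvable(weights, profits, C, P):
    """Decision Knapsack by dynamic programming over capacities."""
    best = [0] * (C + 1)  # best[c] = max profit within capacity c
    for w, p in zip(weights, profits):
        for c in range(C, w - 1, -1):
            best[c] = max(best[c], best[c - w] + p)
    return best[C] >= P

random.seed(1)
n, trials = 15, 200
for c_norm in [0.1, 0.3, 0.5, 0.7, 0.9]:
    hits = 0
    for _ in range(trials):
        weights = [random.randint(1, 100) for _ in range(n)]
        profits = [random.randint(1, 100) for _ in range(n)]
        C = int(c_norm * sum(weights))
        P = int(0.5 * sum(profits))
        hits += solvable(weights, profits, C, P)
    print(f"c = {c_norm:.1f}: {hits / trials:.0%} solvable")
```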