constraint-based reasoning

Intel debuts Pohoiki Springs, a powerful neuromorphic research system for AI workloads


This morning, Intel announced the general readiness of Pohoiki Springs, a powerful self-contained neuromorphic system about the size of five standard servers. The company says the system will be available to members of the Intel Neuromorphic Research Community via the cloud, using Intel's Nx SDK and community-contributed software components, giving them a tool to scale up their neuromorphic research and to explore ways of accelerating workloads that run slowly on today's conventional architectures. Intel claims Pohoiki Springs, which was announced in July 2019, is similar in neural capacity to the brain of a small mammal: its 768 Loihi chips and 100 million neurons are spread across 24 Arria 10 FPGA Nahuku expansion boards (32 chips each) that operate at under 500 watts. This is ostensibly a step on the path to supporting larger and more sophisticated neuromorphic workloads. In fact, just this week, Intel demonstrated that the chips can be used to "teach" an AI model to distinguish among 10 different scents.

Constraint-based Causal Structure Learning with Consistent Separating Sets

Neural Information Processing Systems

We consider constraint-based methods for causal structure learning, such as the PC algorithm or any PC-derived algorithm whose first step consists in pruning a complete graph to obtain an undirected graph skeleton, which is subsequently oriented. All constraint-based methods perform this first step of removing dispensable edges iteratively, whenever a separating set and corresponding conditional independence can be found. Yet constraint-based methods lack robustness over sampling noise and are prone to uncover spurious conditional independences in finite datasets. In particular, there is no guarantee that the separating sets identified during the iterative pruning step remain consistent with the final graph. In this paper, we propose a simple modification of PC and PC-derived algorithms so as to ensure that all separating sets identified to remove dispensable edges are consistent with the final graph, thus enhancing the explainability of constraint-based methods.
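The pruning step described above can be illustrated with a minimal, hypothetical sketch. The `indep` oracle below stands in for a real conditional-independence test, and the chain graph a - b - c is an assumed toy example, not data from the paper:

```python
from itertools import combinations

def prune_skeleton(nodes, indep, max_cond=2):
    """PC-style pruning: delete edge (x, y) whenever some separating set S,
    drawn from the current neighbours of x, renders x and y conditionally
    independent; record S for the later orientation phase."""
    adj = {n: set(nodes) - {n} for n in nodes}
    sepset = {}
    for size in range(max_cond + 1):
        for x in nodes:
            for y in sorted(adj[x]):
                for S in combinations(sorted(adj[x] - {y}), size):
                    if indep(x, y, S):            # conditional-independence oracle
                        adj[x].discard(y)
                        adj[y].discard(x)
                        sepset[frozenset((x, y))] = set(S)
                        break
    return adj, sepset

# Toy chain a - b - c: a and c are independent given {b}
indep = lambda x, y, S: {x, y} == {"a", "c"} and "b" in S
adj, sep = prune_skeleton(["a", "b", "c"], indep, max_cond=1)
```

The recorded `sepset` entries are exactly the separating sets whose consistency with the final graph the paper's modification enforces.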

An Inexact Augmented Lagrangian Framework for Nonconvex Optimization with Nonlinear Constraints

Neural Information Processing Systems

We propose a practical inexact augmented Lagrangian method (iALM) for nonconvex problems with nonlinear constraints. We characterize the total computational complexity of our method subject to a verifiable geometric condition, which is closely related to the Polyak-Lojasiewicz and Mangasarian-Fromowitz conditions. In particular, when a first-order solver is used for the inner iterates, we prove that iALM finds a first-order stationary point with $\tilde{\mathcal{O}}(1/\epsilon^3)$ calls to the first-order oracle. These complexity results match the known theoretical results in the literature. We also provide strong numerical evidence on large-scale machine learning problems, including the Burer-Monteiro factorization of semidefinite programs, and a novel nonconvex relaxation of the standard basis pursuit template.
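The inner/outer structure of an inexact augmented Lagrangian method can be sketched generically. The toy problem below (minimize ||x||^2 subject to x0 + x1 = 1, solution x = (0.5, 0.5)), the step size, and the penalty schedule are illustrative choices, not the paper's algorithm:

```python
import numpy as np

def ialm(f_grad, c, c_jac, x, lam=0.0, beta=1.0, outer=30, inner=200, lr=0.05):
    """Inexact augmented Lagrangian sketch for min f(x) s.t. c(x) = 0."""
    for _ in range(outer):
        # Inexact inner solve: a few gradient steps on the augmented Lagrangian
        for _ in range(inner):
            g = f_grad(x) + (lam + beta * c(x)) * c_jac(x)
            x = x - lr * g
        lam = lam + beta * c(x)          # dual (multiplier) update
        beta = min(beta * 1.5, 10.0)     # grow the penalty, capped for stability
    return x, lam

# Toy problem: minimize ||x||^2 subject to x0 + x1 = 1
f_grad = lambda x: 2.0 * x
c = lambda x: x[0] + x[1] - 1.0
c_jac = lambda x: np.array([1.0, 1.0])
x, lam = ialm(f_grad, c, c_jac, np.array([0.0, 0.0]))
```

The "inexact" part is precisely that the inner loop runs a first-order solver for a fixed budget rather than solving each subproblem exactly.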

A Survey on String Constraint Solving Artificial Intelligence

Strings are a fundamental datatype in all modern programming languages, and operations on strings frequently occur in disparate fields such as software analysis, model checking, database applications, web security, and bioinformatics [3, 11, 19, 21, 27, 28, 49, 60, 67]. Reasoning over strings requires solving arbitrarily complex string constraints, i.e., relations defined over a number of string variables. Typical examples of string constraints are string length, (dis-)equality, concatenation, substring, and regular expression matching. With the term "string constraint solving" (in short, string solving or SCS) we refer to the process of modelling, processing, and solving combinatorial problems involving string constraints. We may see SCS as a declarative paradigm that falls at the intersection of constraint solving and combinatorics on words: the user states a problem with string variables and constraints, and a suitable string solver seeks a solution to that problem. Although works on the combinatorics of words were already published in the 1940s [110], the dawn of SCS dates back to the late 1980s, coinciding with the rise of the Constraint Programming (CP) [114] and Constraint Logic Programming (CLP) [73] paradigms. Pioneers in this field were, for example, Trilogy [142], a language providing string, integer, and real constraints, and CLP(Σ) [144], an instance of the CLP scheme representing strings as regular sets. The latter in particular was the first known attempt to use string constraints like regular membership to denote regular sets.
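As a toy illustration of the constraint types listed above (length, concatenation, regex membership), the following brute-force "solver" enumerates assignments for two string variables. Real string solvers use far more sophisticated propagation; the alphabet, length bound, and constraints here are assumed purely for the example:

```python
import re
from itertools import product

def solve_strings(alphabet, max_len, constraints):
    """Brute-force sketch: enumerate all words up to max_len for two string
    variables (x, y) and return the first pair satisfying every constraint."""
    words = [""] + ["".join(p) for n in range(1, max_len + 1)
                    for p in product(alphabet, repeat=n)]
    for x in words:
        for y in words:
            if all(c(x, y) for c in constraints):
                return x, y
    return None                                        # no solution in the bound

constraints = [
    lambda x, y: len(x) == 2,                          # length constraint
    lambda x, y: x + y == "abba",                      # concatenation constraint
    lambda x, y: re.fullmatch(r"b+a", y) is not None,  # regex membership
]
sol = solve_strings("ab", 3, constraints)
```

The exponential blow-up of this enumeration is exactly why dedicated string solvers exist.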

Bringing freedom in variable choice when searching counter-examples in floating point programs Artificial Intelligence

Program verification techniques typically focus on finding counterexamples that violate properties of a program. Constraint programming offers a convenient way to verify programs by modeling their state transformations and specifying searches that seek counterexamples. Floating-point computations present additional challenges for verification, given the semantic subtleties of floating-point arithmetic. This paper focuses on search strategies for CSPs over floating-point constraint systems dedicated to program verification. It introduces a new search heuristic based on the global number of occurrences that outperforms state-of-the-art strategies. More importantly, it demonstrates that a new technique that branches only on the input variables of the verified program improves performance. Combined with a diversification technique that prevents the selection of the same variable within a fixed horizon, it further improves performance and reduces the disparities between variable choice heuristics. The result is a robust methodology that can tailor the search strategy to the sought properties of the counterexample.
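The idea of branching only on the input variables of the verified program can be caricatured with a one-variable bisection sketch: the search splits the input interval and never branches on intermediate program variables. The claimed property and the interval below are hypothetical; this is not the paper's CSP-based strategy:

```python
def find_counterexample(prop, lo, hi, eps=1e-3):
    """Search for an input refuting `prop` by branching only on the input
    interval: test the midpoint, then bisect, never touching intermediate
    program variables."""
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        m = (a + b) / 2.0
        if not prop(m):
            return m                 # counterexample: the property fails at m
        if b - a > eps:
            stack.append((a, m))
            stack.append((m, b))
    return None

# Hypothetical claimed property: "x * x never equals 6.25 on [0, 4]"
cex = find_counterexample(lambda x: x * x != 6.25, 0.0, 4.0)
```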

An efficient constraint based framework for handling floating point SMT problems Artificial Intelligence

This paper introduces the 2019 version of \us{}, a novel Constraint Programming framework for floating point verification problems expressed in the SMT language of SMTLIB. SMT solvers decompose their task by delegating to specific theories (e.g., floating point, bit vectors, arrays, ...) the job of reasoning about combinatorial or otherwise complex constraints for which a SAT encoding would be cumbersome or ineffective. These decomposition and encoding processes lead to the obfuscation of the high-level constraints and to a loss of information on the structure of the combinatorial model. In \us{}, constraints over the floats are first-class objects, and the purpose is to expose and exploit the structure of floating point domains to enhance the search process. A symbolic phase rewrites each SMTLIB instance into elementary constraints and eliminates auxiliary variables whose presence is counterproductive. A diversification technique within the search steers it away from costly enumerations in unproductive areas of the search space. The empirical evaluation demonstrates that the 2019 version of \us{} is competitive on computationally challenging floating point benchmarks that induce significant search effort even for other CP solvers. It highlights that the ability to harness both inference and search is critical: it yields a factor-3 improvement over Colibri and is up to 10 times faster than SMT solvers. The evaluation was conducted over 214 benchmarks (the Griggio suite), a standard within SMTLIB.
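The symbolic rewriting step, inlining the auxiliary variables an SMT-style encoding introduces so that only elementary constraints over the original variables remain, can be sketched on expression trees. The `t1`/`t2` names and the operators below are hypothetical:

```python
def inline(expr, defs):
    """Recursively substitute auxiliary-variable definitions into an
    expression tree, leaving one constraint over the original variables."""
    if isinstance(expr, str):
        return inline(defs[expr], defs) if expr in defs else expr
    return (expr[0],) + tuple(inline(a, defs) for a in expr[1:])

# t1, t2: hypothetical auxiliaries introduced by an SMT-style encoding
defs = {"t1": ("add", "x", "y"), "t2": ("mul", "t1", "x")}
goal = inline(("le", "t2", "c"), defs)   # (x + y) * x <= c, without auxiliaries
```

Removing such auxiliaries shrinks the variable set the search must branch on, which is one motivation the abstract gives for the rewriting phase.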

Injecting Domain Knowledge in Neural Networks: a Controlled Experiment on a Constrained Problem Artificial Intelligence

Given enough data, Deep Neural Networks (DNNs) are capable of learning complex input-output relations with high accuracy. In several domains, however, data is scarce or expensive to retrieve, while a substantial amount of expert knowledge is available. It seems reasonable that if we can inject this additional information into the DNN, we could ease the learning process. One such case is that of Constraint Problems, for which declarative approaches exist and pure ML solutions have obtained mixed success. Using a classical constrained problem as a case study, we perform controlled experiments to probe the impact of progressively adding domain and empirical knowledge to the DNN. Our results are very encouraging, showing that (at least in our setup) embedding domain knowledge at training time can have a considerable effect and that a small amount of empirical knowledge is sufficient to obtain practically useful results.
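One common way to inject declarative knowledge at training time is to add a constraint-violation penalty to the supervised loss. The sketch below assumes, purely for illustration, that network outputs should sum to 1 per row; this is not the paper's case study:

```python
import numpy as np

def penalized_loss(pred, target, penalty_weight=1.0):
    """Supervised loss plus a penalty for violating declared domain knowledge
    (here, hypothetically, that each output row must sum to 1)."""
    mse = np.mean((pred - target) ** 2)
    violation = np.mean((pred.sum(axis=1) - 1.0) ** 2)
    return mse + penalty_weight * violation

pred = np.array([[1.0, 1.0]])        # fits the target but violates the rule
loss = penalized_loss(pred, pred)    # mse is 0, so only the penalty remains
```

The penalty steers training toward constraint-satisfying outputs even where labeled data is scarce, which is the effect the controlled experiments probe.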

ORCSolver: An Efficient Solver for Adaptive GUI Layout with OR-Constraints Artificial Intelligence

OR-constrained (ORC) graphical user interface layouts unify conventional constraint-based layouts with flow layouts, which enables the definition of flexible layouts that adapt to screens with different sizes, orientations, or aspect ratios with only a single layout specification. Unfortunately, solving ORC layouts with current solvers is time-consuming and the needed time increases exponentially with the number of widgets and constraints. To address this challenge, we propose ORCSolver, a novel solving technique for adaptive ORC layouts, based on a branch-and-bound approach with heuristic preprocessing. We demonstrate that ORCSolver simplifies ORC specifications at runtime and our approach can solve ORC layout specifications efficiently at near-interactive rates.
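A branch-and-bound search over OR-constraints can be sketched as choosing one alternative per widget while pruning partial choices whose cost already reaches the incumbent. The two-widget example and its placement costs below are assumed, not ORCSolver's actual model:

```python
def branch_and_bound(alternatives, cost):
    """Pick one alternative per widget, pruning any partial layout whose
    accumulated cost already matches or exceeds the best complete one."""
    best = [float("inf"), None]
    def search(i, chosen, acc):
        if acc >= best[0]:
            return                            # bound: prune this branch
        if i == len(alternatives):
            best[0], best[1] = acc, list(chosen)
            return
        for alt in alternatives[i]:           # branch over the OR-alternatives
            search(i + 1, chosen + [alt], acc + cost(alt))
    search(0, [], 0.0)
    return best[1], best[0]

# Two widgets, each with two hypothetical placements and error costs
layout, cost_val = branch_and_bound(
    [["row", "wrap"], ["row", "wrap"]],
    lambda alt: {"row": 1.0, "wrap": 2.0}[alt])
```

The bound is what keeps the search from the exponential enumeration that makes naive ORC solving slow.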

Automatic Cost Function Learning with Interpretable Compositional Networks Artificial Intelligence

Cost Function Networks (CFNs) are a formalism in Constraint Programming for modeling combinatorial satisfaction or optimization problems. By associating with each constraint type a function that evaluates the quality of an assignment, they extend the expressivity of regular CSP/COP formalisms, but at the price of making problem modeling harder. Indeed, in addition to the regular variable/domain/constraint sets, one must provide a set of cost functions that are not always easy to define. Here we propose a method to automatically learn the cost function of a constraint, given a function deciding whether assignments are valid or not. This is, to the best of our knowledge, the first attempt to learn cost functions automatically. Our method learns cost functions in a supervised fashion, trying to reproduce the Hamming distance, using a variation of neural networks we named Interpretable Compositional Networks, which allows us to obtain explainable results, unlike regular artificial neural networks. We evaluate it on five different constraints to show its versatility. Experiments show that functions learned on small dimensions scale to high dimensions, outputting a perfect or near-perfect Hamming distance for most constraints. Our system can be used to generate cost functions automatically, thus providing the expressivity of CFNs with the same modeling effort as for CSPs/COPs.
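The learning target mentioned above, the Hamming distance from an assignment to the nearest valid one, can be computed by brute force on small instances. The AllDifferent constraint below is one illustrative choice among the constraints such a method could target:

```python
from itertools import product

def hamming_cost(assignment, domain, valid):
    """Hamming distance from `assignment` to the nearest assignment that
    `valid` accepts: the reference a learned cost function tries to match."""
    best = len(assignment) + 1
    for cand in product(domain, repeat=len(assignment)):
        if valid(cand):
            best = min(best, sum(a != b for a, b in zip(assignment, cand)))
    return best

all_different = lambda t: len(set(t)) == len(t)   # an example hard constraint
cost = hamming_cost((1, 1, 2), [1, 2, 3], all_different)   # one change suffices
```

This brute-force reference is exponential in the number of variables, which is why a learned, compositional approximation that scales to high dimensions is attractive.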

Arc-Consistency computes the minimal binarised domains of an STP. Use of the result in a TCSP solver, in a TCSP-based job shop scheduler, and in generalising Dijkstra's one-to-all algorithm Artificial Intelligence

TCSPs (Temporal Constraint Satisfaction Problems), as defined in [Dechter et al., 1991], get rid of unary constraints by binarising them after adding an "origin of the world" variable. The constraints are therefore exclusively binary; additionally, a TCSP verifies the property that it is node-consistent and arc-consistent. Path-consistency, the next higher local consistency, solves the consistency problem of a convex TCSP, referred to in [Dechter et al., 1991] as an STP (Simple Temporal Problem); more than that, the output of path-consistency applied to an n+1-variable STP is a minimal and strongly n+1-consistent STP. Weaker versions of path-consistency, aimed at avoiding what is referred to in [Schwalb and Dechter, 1997] as the "fragmentation problem", are used as filtering procedures in recursive backtracking algorithms for the consistency problem of a general TCSP. In this work, we look at the constraints between the "origin of the world" variable and the other variables as the (binarised) domains of these other variables. With this in mind, we define a notion of arc-consistency for TCSPs, which we will refer to as binarised-domains Arc-Consistency, or bdArc-Consistency for short. We provide an algorithm achieving bdArc-Consistency for a TCSP, which we will refer to as bdAC3, for it is an adaptation of Mackworth's [1977] well-known arc-consistency algorithm AC3. We show that bdArc-Consistency computes the minimal (binarised) domains of an STP. We then show how to use the result in a general TCSP solver, in a TCSP-based job shop scheduler, and in generalising Dijkstra's well-known one-to-all shortest paths algorithm.
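The classical AC3 scheme that bdAC3 adapts can be sketched as follows. The temporal-style constraint (b - a must lie in [1, 2]) and the integer domains are assumed toy data, not the paper's bdArc-Consistency algorithm itself:

```python
from collections import deque

def ac3(domains, arcs):
    """Textbook AC3 (Mackworth, 1977): revise the domain of x against each
    arc (x, y, rel); when a domain shrinks, re-enqueue the arcs pointing at x."""
    queue = deque(arcs)
    while queue:
        x, y, rel = queue.popleft()
        revised = [vx for vx in domains[x]
                   if any(rel(vx, vy) for vy in domains[y])]
        if len(revised) < len(domains[x]):
            domains[x] = revised
            queue.extend(arc for arc in arcs if arc[1] == x)
    return domains

# Toy temporal-style constraint: b - a must lie in [1, 2]
arcs = [("a", "b", lambda va, vb: 1 <= vb - va <= 2),
        ("b", "a", lambda vb, va: 1 <= vb - va <= 2)]
domains = ac3({"a": list(range(5)), "b": list(range(5))}, arcs)
```

In the bdArc-Consistency setting, the pruned "domains" are the binarised constraints between the origin-of-the-world variable and each other variable.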