Logic Programming with Satisfiability

arXiv.org Artificial Intelligence

This paper presents a Prolog interface to the MiniSat satisfiability solver. Logic programming with satisfiability combines the strengths of the two paradigms: logic programming for encoding search problems into satisfiability on the one hand, and efficient SAT solving on the other. The synergy between the two yields a programming paradigm that we propose here as a logic programming pearl. To illustrate logic programming with SAT solving, we give an example Prolog program which solves instances of Partial MAXSAT.
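
The paper's interface is written in Prolog; as a language-neutral illustration of the Partial MAXSAT problem it targets, here is a minimal brute-force sketch in Python. The clause encoding and helper names are ours, not the paper's, and the exhaustive search stands in for a real solver such as MiniSat:

```python
from itertools import product

# A literal is a nonzero int: v means variable v is true, -v means
# false (the usual DIMACS convention). A clause is a tuple of literals.

def satisfied(clause, assignment):
    """True if some literal in the clause holds under the assignment."""
    return any(assignment[abs(lit)] == (lit > 0) for lit in clause)

def partial_maxsat(hard, soft, n_vars):
    """Brute-force Partial MAXSAT: every hard clause must hold;
    maximize the number of satisfied soft clauses.  Exponential in
    n_vars -- a specification of the problem, not an efficient solver."""
    best, best_assignment = -1, None
    for bits in product([False, True], repeat=n_vars):
        a = {v + 1: bits[v] for v in range(n_vars)}
        if not all(satisfied(c, a) for c in hard):
            continue
        score = sum(satisfied(c, a) for c in soft)
        if score > best:
            best, best_assignment = score, a
    return best, best_assignment

# Tiny instance: x1 must hold (hard); prefer x2 and not-x1 (soft).
print(partial_maxsat(hard=[(1,)], soft=[(2,), (-1,)], n_vars=2))
```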


Menzerath-Altmann Law for Syntactic Structures in Ukrainian

arXiv.org Artificial Intelligence

In its general form, the dependence can be formulated as follows: the longer the construct, the shorter its constituents. Later, this observation was put into mathematical form by Gabriel Altmann [1]. It is now known as the Menzerath-Altmann law and is considered one of the general linguistic laws, with evidence reaching far beyond the linguistic domain itself [2]. The relationship has been studied at various levels of language units, such as syllable-word, morpheme-word, etc. While word-sentence seems the most straightforward generalization at the syntactic level, it appears that an intermediate unit must in fact be introduced into this scheme [3, p. 283]. This intermediate unit is usually thought to be the phrase or the clause, which are direct constituents of the sentence [4]. We would like to note, however, that the notion of clause is not well elaborated in Eastern European linguistic traditions [5], including Ukrainian (cf.
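
For reference, the Menzerath-Altmann law is commonly given the functional form below, where y is the mean constituent size, x the construct size, and a, b, c are fitted parameters; this standard parametrization is background knowledge, not quoted from the abstract:

```latex
% Menzerath-Altmann law, common parametrization
y = a \, x^{b} \, e^{-c x}
```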


Algorithmic Complexity Bounds on Future Prediction Errors

arXiv.org Artificial Intelligence

We bound the future loss when predicting any (computably) stochastic sequence online. Solomonoff finitely bounded the total deviation of his universal predictor $M$ from the true distribution $\mu$ by the algorithmic complexity of $\mu$. Here we assume we are at a time $t>1$ and have already observed $x = x_1 \ldots x_t$. We bound the future prediction performance on $x_{t+1} x_{t+2} \ldots$ by a new variant of the algorithmic complexity of $\mu$ given $x$, plus the complexity of the randomness deficiency of $x$. The new complexity is monotone in its condition in the sense that it can only decrease if the condition is prolonged. We also briefly discuss potential generalizations to Bayesian model classes and to classification problems.
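
For context, Solomonoff's classical bound referred to here is usually stated as follows for binary sequences (the exact constant varies across presentations; this squared-error form is supplied by us as background, not quoted from the abstract):

```latex
% Total expected squared deviation of M from mu, bounded by K(mu)
\sum_{t=1}^{\infty} \mathbf{E}_{\mu}\!\left[
  \bigl( M(0 \mid x_{<t}) - \mu(0 \mid x_{<t}) \bigr)^{2}
\right] \;\le\; \frac{\ln 2}{2}\, K(\mu)
```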


A Backtracking-Based Algorithm for Computing Hypertree-Decompositions

arXiv.org Artificial Intelligence

Hypertree decompositions of hypergraphs are a generalization of tree decompositions of graphs. The corresponding hypertree-width is a measure of the cyclicity, and therefore the tractability, of the encoded computation problem. Many NP-hard decision and computation problems are known to be tractable on instances whose structure corresponds to hypergraphs of bounded hypertree-width. Intuitively, the smaller the hypertree-width, the faster the computation problem can be solved. In this paper, we present det-k-decomp, a new backtracking-based algorithm for computing hypertree decompositions of small width. Our benchmark evaluations show that det-k-decomp significantly outperforms opt-k-decomp, the only exact hypertree decomposition algorithm so far. Even compared to the best heuristic algorithm, we obtained competitive results as long as the hypergraphs are not too large.
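
det-k-decomp itself enforces additional conditions that distinguish genuine hypertree decompositions from the simplification below; this toy Python sketch (all names and the simplified decomposability test are ours) conveys only the backtracking skeleton: pick a separator of at most k hyperedges, then recurse on the components it disconnects:

```python
from itertools import combinations

def verts(edges):
    """All vertices occurring in a collection of hyperedges."""
    return frozenset(v for e in edges for v in e)

def split(edges, covered):
    """Connected components of the edges not fully covered, where two
    edges are adjacent iff they share a vertex outside `covered`."""
    pool = [e for e in edges if set(e) - covered]
    comps = []
    while pool:
        comp, frontier = set(), {pool.pop()}
        while frontier:
            e = frontier.pop()
            comp.add(e)
            link = set(e) - covered
            hit = [f for f in pool if set(f) & link]
            pool = [f for f in pool if not (set(f) & link)]
            frontier.update(hit)
        comps.append(frozenset(comp))
    return comps

def decomposable(comp, connector, all_edges, k, memo=None):
    """Backtracking: pick a separator of <= k hyperedges whose vertices
    cover `connector`, then recurse on the disconnected components."""
    memo = {} if memo is None else memo
    key = (comp, connector)
    if key not in memo:
        memo[key] = False
        for size in range(1, k + 1):
            for sep in combinations(all_edges, size):
                cover = verts(sep)
                if not connector <= cover:
                    continue                # root bag must cover connector
                subs = split(comp, cover)
                if any(len(s) >= len(comp) for s in subs):
                    continue                # no progress: would not terminate
                if all(decomposable(s, verts(s) & cover, all_edges, k, memo)
                       for s in subs):
                    memo[key] = True
                    break
            if memo[key]:
                break
    return memo[key]

# Toy hypergraph: a triangle of binary edges, decomposable with k = 2.
H = [("a", "b"), ("b", "c"), ("a", "c")]
print(decomposable(frozenset(H), frozenset(), H, k=2))
```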


On the Complexity of the Numerically Definite Syllogistic and Related Fragments

arXiv.org Artificial Intelligence

In this paper, we determine the complexity of the satisfiability problem for various logics obtained by adding numerical quantifiers, and other constructions, to the traditional syllogistic. In addition, we demonstrate the incompleteness of some recently proposed proof-systems for these logics.
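
As background (our illustration, not the paper's notation), the numerically definite syllogistic admits sentences such as "at least n p are q" and "at most n p are not q", which in first-order terms can be rendered with counting quantifiers:

```latex
\exists^{\geq n} x \,\bigl( p(x) \land q(x) \bigr)
\qquad
\exists^{\leq n} x \,\bigl( p(x) \land \lnot q(x) \bigr)
```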


Certainty Closure: Reliable Constraint Reasoning with Incomplete or Erroneous Data

arXiv.org Artificial Intelligence

Constraint Programming (CP) has proved an effective paradigm for modelling and solving difficult combinatorial satisfaction and optimisation problems from disparate domains. Many such problems arising from the commercial world are permeated by data uncertainty. Existing CP approaches that accommodate uncertainty are less suited to uncertainty arising from incomplete and erroneous data, because they do not build reliable models and solutions guaranteed to address the user's genuine problem as she perceives it. Other fields, such as reliable computation, offer combinations of models and associated methods to handle these types of uncertain data, but lack an expressive framework characterising the resolution methodology independently of the model. We present a unifying framework that extends the CP formalism in both model and solutions, to tackle ill-defined combinatorial problems with incomplete or erroneous data. The certainty closure framework brings together modelling and solving methodologies from different fields into the CP paradigm to provide reliable and efficient approaches for uncertain constraint problems. We demonstrate the applicability of the framework on a case study in network diagnosis. We define resolution forms that give generic templates, and their associated operational semantics, to derive practical solution methods for reliable solutions.
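
To make the core idea concrete, here is our toy illustration (not the paper's formalism): with finite domains, a "closure" of solutions can be computed by collecting every assignment that solves the constraint under at least one realisation of the uncertain data, so no potential solution to the user's true problem is lost:

```python
from itertools import product

# Toy uncertain CSP: x + a == y, with x, y in {0..4} and the
# coefficient a only known to lie in {2, 3, 4} (incomplete or
# erroneous data).  The closure keeps all (x, y) pairs supported by
# SOME realisation of a.
DOM = range(5)
A_REALISATIONS = {2, 3, 4}

closure = {
    (x, y)
    for a in A_REALISATIONS
    for x, y in product(DOM, DOM)
    if x + a == y
}
print(sorted(closure))
```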


Distributed Control of Microscopic Robots in Biomedical Applications

arXiv.org Artificial Intelligence

Current developments in molecular electronics, motors and chemical sensors could enable constructing large numbers of devices able to sense, compute and act in micron-scale environments. Such microscopic machines, of sizes comparable to bacteria, could simultaneously monitor entire populations of cells individually in vivo. This paper reviews plausible capabilities for microscopic robots and the physical constraints arising from operation in fluids at low Reynolds number, from diffusion-limited sensing, and from thermal noise due to Brownian motion. Simple distributed controls are then presented in the context of prototypical biomedical tasks, which require control decisions on millisecond time scales. The resulting behaviors illustrate trade-offs among speed, accuracy and resource use. A specific example is monitoring a flowing fluid for patterns of chemicals released at chemically distinctive sites. Information collected from a large number of such devices allows estimating properties of cell-sized chemical sources in a macroscopic volume. The microscopic devices moving with the fluid flow in small blood vessels can detect chemicals released by tissues in response to localized injury or infection. We find the devices can readily discriminate a single cell-sized chemical source from the background chemical concentration, providing high-resolution sensing in both time and space. By contrast, such a source would be difficult to distinguish from background when diluted throughout the blood volume as obtained with a blood sample.
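
As a rough quantitative illustration of diffusion-limited sensing (our back-of-the-envelope numbers, not the paper's): the diffusive capture rate onto a fully absorbing sphere of radius R in a chemical background of concentration c is given by the classic Smoluchowski/Berg-Purcell result J = 4*pi*D*R*c:

```python
import math

# Diffusion-limited capture rate J = 4*pi*D*R*c for an absorbing
# sphere.  Example values are ours: a bacterium-sized device in a
# dilute (1 nM) chemical background.
D = 1e-9                    # diffusion coefficient of a small molecule, m^2/s
R = 0.5e-6                  # device radius, m (roughly bacterial scale)
c = 1e-9 * 6.02e23 * 1e3    # 1 nM converted to molecules per m^3

J = 4 * math.pi * D * R * c  # molecules captured per second
print(f"capture rate: {J:.0f} molecules/s")
```

With these assumed values the device intercepts a few thousand molecules per second, consistent with the claim that millisecond-scale control decisions can rest on meaningful chemical counts.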


Efficient constraint propagation engines

arXiv.org Artificial Intelligence

This paper presents a model and implementation techniques for speeding up constraint propagation. Three fundamental approaches to improving constraint propagation, based on propagators as implementations of constraints, are explored: keeping track of which propagators are at fixpoint, choosing which propagator to apply next, and combining several propagators for the same constraint. We show how idempotence reasoning and events help track fixpoints more accurately. We improve these methods by applying them dynamically (taking current domains into account to improve accuracy). We define priority-based approaches to choosing the next propagator and show that dynamic priorities can improve propagation. We illustrate that the use of multiple propagators for the same constraint can be advantageous with priorities, and introduce staged propagators, which combine the effects of multiple propagators with priorities for greater efficiency.
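
A minimal engine in this spirit (a priority queue over propagators, plus an idempotence flag to avoid useless rescheduling) might look as follows in Python; all class and function names are our own sketch, not the paper's system:

```python
import heapq
from itertools import count

class Propagator:
    """A propagator narrows variable domains (plain Python sets) and
    reports whether it is at fixpoint, so the engine can skip useless
    rescheduling -- the idempotence reasoning described above."""
    def __init__(self, priority, variables):
        self.priority = priority       # smaller number = run earlier
        self.variables = variables
    def propagate(self, domains):
        """Return (set_of_changed_vars, at_fixpoint)."""
        raise NotImplementedError

class NotEqual(Propagator):
    """x != y: once one side is fixed, prune that value from the other."""
    def __init__(self, x, y):
        super().__init__(priority=0, variables=(x, y))  # cheap propagator
        self.x, self.y = x, y
    def propagate(self, domains):
        changed = set()
        for a, b in ((self.x, self.y), (self.y, self.x)):
            if len(domains[a]) == 1 and domains[a] <= domains[b]:
                domains[b] = domains[b] - domains[a]
                changed.add(b)
        return changed, True           # idempotent: one run reaches fixpoint

def solve(domains, propagators):
    """Run all propagators to a mutual fixpoint, cheapest first."""
    watching = {}
    for p in propagators:
        for v in p.variables:
            watching.setdefault(v, []).append(p)
    tie = count()                      # tie-breaker for equal priorities
    queue = [(p.priority, next(tie), p) for p in propagators]
    heapq.heapify(queue)
    queued = set(propagators)
    while queue:
        _, _, p = heapq.heappop(queue)
        queued.discard(p)
        changed, at_fix = p.propagate(domains)
        for v in changed:
            for q in watching.get(v, []):
                # Re-queue p itself only if it admits not being at fixpoint.
                if (q is not p or not at_fix) and q not in queued:
                    heapq.heappush(queue, (q.priority, next(tie), q))
                    queued.add(q)
    return domains

doms = {"x": {1, 2}, "y": {2}, "z": {1, 2}}
print(solve(doms, [NotEqual("x", "y"), NotEqual("x", "z")]))
```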


Decentralized Failure Diagnosis of Stochastic Discrete Event Systems

arXiv.org Artificial Intelligence

Recently, the diagnosability of stochastic discrete event systems (SDESs) was investigated in the literature, and the failure diagnosis considered was centralized. In this paper, we propose an approach to decentralized failure diagnosis of SDESs, where the stochastic system uses multiple local diagnosers to detect failures and each local diagnoser possesses its own information. In this sense, centralized failure diagnosis of SDESs can be viewed as a special case of the decentralized failure diagnosis presented here, with only one projection. The main contributions are as follows: (1) We formalize the notion of codiagnosability for stochastic automata, which means that a failure can be detected by at least one local stochastic diagnoser within a finite delay. (2) We construct a codiagnoser from a given stochastic automaton with multiple projections; the codiagnoser, together with the local diagnosers, is used to test the codiagnosability condition of SDESs. (3) We establish a number of basic properties of the codiagnoser; in particular, we present a necessary and sufficient condition for the codiagnosability of SDESs. (4) We give a detailed computing method for checking whether codiagnosability is violated. (5) Some examples illustrate the application of codiagnosability and its computing method.
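
Ignoring the stochastic aspect and the finite-delay condition, the combinatorial core of codiagnosability can be sketched over finite traces in a few lines of Python; the event names and traces are our toy example, not the paper's construction:

```python
def project(trace, observable):
    """A local diagnoser sees only its own observable events."""
    return tuple(e for e in trace if e in observable)

def codiagnosable(faulty, normal, sites):
    """Finite-trace toy check: every faulty trace must be
    distinguishable from all normal traces by at least one site."""
    return all(
        any(all(project(f, obs) != project(n, obs) for n in normal)
            for obs in sites)
        for f in faulty
    )

# Two local sites with different observable alphabets (our toy data).
sites = [{"a", "b"}, {"b", "c"}]
faulty = [("a", "f", "c")]           # 'f' is the unobservable failure
normal = [("a",), ("b",)]
print(codiagnosable(faulty, normal, sites))   # site 2 distinguishes -> True
```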


Language, logic and ontology: uncovering the structure of commonsense knowledge

arXiv.org Artificial Intelligence

The purpose of this paper is twofold: (i) we argue that the structure of commonsense knowledge must be discovered, rather than invented; and (ii) we argue that natural language, which is the best known theory of our (shared) commonsense knowledge, should itself be used as a guide to discovering that structure. Beyond suggesting a systematic method for discovering the structure of commonsense knowledge, the approach we propose also seems to explain a number of phenomena in natural language, such as metaphor, intensionality, and the semantics of nominal compounds. Admittedly, our ultimate goal is quite ambitious: nothing less than the systematic 'discovery' of a well-typed ontology of commonsense knowledge and, subsequently, the formulation of the long-awaited 'meaning algebra'.