The SeaLion has Landed: An IDE for Answer-Set Programming---Preliminary Report

arXiv.org Artificial Intelligence

We report on the current state and designated features of the tool SeaLion, which is aimed at serving as an integrated development environment (IDE) for answer-set programming (ASP). A main goal of SeaLion is to provide a user-friendly environment that supports a developer in writing, evaluating, debugging, and testing answer-set programs. To this end, new support techniques have to be developed that suit the requirements of the answer-set semantics and meet the constraints of practical applicability. In this respect, SeaLion benefits from the research results of a project on methods and methodologies for answer-set program development in whose context SeaLion is realised. Currently, the tool provides source-code editors for the languages of Gringo and DLV that offer syntax highlighting, syntax checking, and a visual program outline. Further implemented features are support for external solvers as well as visualisation and visual editing of answer sets. SeaLion comes as a plugin for the popular Eclipse platform and itself provides interfaces for future extensions of the IDE.


Instant Replay: Investigating Statistical Analysis in Sports

arXiv.org Artificial Intelligence

Technology has had an unquestionable impact on the way people watch sports. Along with this technological evolution has come a higher standard for ensuring a good viewing experience for the casual sports fan. It can be argued that the pervasion of statistical analysis in sports serves to satiate the fan's desire for detailed sports statistics. The goal of statistical analysis in sports is a simple one: to eliminate subjective analysis. In this paper, we review previous work that attempts to analyze various aspects of sports using ideas from Markov chains, Bayesian inference, and Markov chain Monte Carlo (MCMC) methods. The unifying goal of these works is to achieve an accurate representation of the player's ability, the sport, or the environmental effects on the player's performance. With the prevalence of cheap computation, it is possible that techniques from Artificial Intelligence could improve the results of statistical analysis in sports. This is best illustrated by an evaluation of football using Neuro-Dynamic Programming, a Control Theory paradigm heavily based on the theory of stochastic processes. The results of this method suggest that statistical analysis in sports may benefit from ideas from the areas of Control Theory or Machine Learning.
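
To make the Markov-chain viewpoint concrete, the following sketch models a single possession as a small absorbing Markov chain and computes its expected points by first-step analysis. The states, transition probabilities, and point values are invented for illustration and are not taken from the works reviewed.

    # Hypothetical possession model: states and one-step transition probabilities.
    # All numbers below are made up for illustration only.
    P = {
        "possession": {"shot_2pt": 0.45, "shot_3pt": 0.35, "turnover": 0.20},
        "shot_2pt":   {"made_2": 0.50, "miss": 0.50},
        "shot_3pt":   {"made_3": 0.36, "miss": 0.64},
    }
    points = {"made_2": 2, "made_3": 3, "miss": 0, "turnover": 0}  # absorbing states

    def expected_points(state):
        """Expected points of a possession starting in `state` (first-step analysis)."""
        if state in points:                      # absorbing: payoff is known
            return points[state]
        return sum(p * expected_points(nxt) for nxt, p in P[state].items())

    print(expected_points("possession"))         # roughly 0.83 points per possession here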


Bi-modal G\"odel logic over [0,1]-valued Kripke frames

arXiv.org Artificial Intelligence

We consider the G\"odel bi-modal logic determined by fuzzy Kripke models where both the propositions and the accessibility relation are infinitely valued over the standard G\"odel algebra [0,1] and prove strong completeness of Fischer Servi intuitionistic modal logic IK plus the prelinearity axiom with respect to this semantics. We axiomatize also the bi-modal analogues of $T,$ $S4,$ and $S5$ obtained by restricting to models over frames satisfying the [0,1]-valued versions of the structural properties which characterize these logics. As application of the completeness theorems we obtain a representation theorem for bi-modal G\"odel algebras.


Kara: A System for Visualising and Visual Editing of Interpretations for Answer-Set Programs

arXiv.org Artificial Intelligence

In answer-set programming (ASP), the solutions of a problem are encoded in dedicated models, called answer sets, of a logical theory. These answer sets are computed from the program that represents the theory by means of an ASP solver and returned to the user as sets of ground first-order literals. As this type of representation is often cumbersome for the user to interpret, tools like ASPVIZ and IDPDraw were developed that allow for visualising answer sets. The tool Kara, introduced in this paper, follows these approaches, using ASP itself as a language for defining visualisations of interpretations. Unlike existing tools that position graphic primitives according to static coordinates only, Kara allows for more high-level specifications, supporting graph structures, grids, and relative positioning of graphical elements. Moreover, generalising the functionality of previous tools, Kara provides modifiable visualisations such that interpretations can be manipulated by graphically editing their visualisations. This is realised by resorting to abductive reasoning techniques. Kara is part of SeaLion, a forthcoming integrated development environment (IDE) for ASP.


Performance Bounds for Lambda Policy Iteration and Application to the Game of Tetris

arXiv.org Artificial Intelligence

We consider the discrete-time infinite-horizon optimal control problem formalized by Markov Decision Processes. We revisit the work of Bertsekas and Ioffe, which introduced $\lambda$ Policy Iteration, a family of algorithms parameterized by $\lambda$ that generalizes the standard algorithms Value Iteration and Policy Iteration and has deep connections with the Temporal Differences algorithm TD($\lambda$) described by Sutton and Barto. We deepen the original theory developed by the authors by providing convergence rate bounds which generalize standard bounds for Value Iteration described, for instance, by Puterman. The main contribution of this paper is then to develop the theory of this algorithm when it is used in an approximate form and to show that this is sound. In doing so, we extend and unify the separate analyses developed by Munos for Approximate Value Iteration and Approximate Policy Iteration. Finally, we revisit the use of this algorithm in the training of a Tetris-playing controller, as originally done by Bertsekas and Ioffe. We provide an original performance bound that can be applied to such an undiscounted control problem. Our empirical results differ from those of Bertsekas and Ioffe (which were originally qualified as "paradoxical" and "intriguing") and are much more in line with what one would expect from a learning experiment. We discuss possible reasons for this difference.
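
To fix ideas, here is a small tabular sketch of $\lambda$ Policy Iteration in the exact (non-approximate) case: at each step the greedy policy is extracted, and the next value function is the fixed point of a $\lambda$-mixture of that policy's Bellman operator, so that $\lambda=0$ recovers Value Iteration and $\lambda=1$ recovers Policy Iteration. The MDP below is synthetic and the code is only illustrative.

    import numpy as np

    def lambda_policy_iteration(P, r, gamma=0.95, lam=0.5, iters=200):
        """Tabular lambda-policy iteration (sketch).

        P: transition tensor of shape (A, S, S); r: rewards of shape (A, S).
        lam = 0 behaves like Value Iteration, lam = 1 like Policy Iteration.
        """
        A, S, _ = P.shape
        v = np.zeros(S)
        for _ in range(iters):
            # Greedy policy with respect to the current value function.
            q = r + gamma * np.einsum("asn,n->as", P, v)
            pi = q.argmax(axis=0)
            P_pi = P[pi, np.arange(S), :]          # (S, S) transitions under pi
            r_pi = r[pi, np.arange(S)]             # (S,) rewards under pi
            # v_next solves v = (1 - lam) * T_pi v_k + lam * T_pi v, i.e.
            # (I - lam * gamma * P_pi) v = r_pi + (1 - lam) * gamma * P_pi @ v_k
            rhs = r_pi + (1.0 - lam) * gamma * P_pi @ v
            v = np.linalg.solve(np.eye(S) - lam * gamma * P_pi, rhs)
        return v, pi

    # Synthetic 5-state, 2-action MDP (illustrative only).
    rng = np.random.default_rng(0)
    P = rng.dirichlet(np.ones(5), size=(2, 5))     # transition rows sum to 1
    r = rng.uniform(size=(2, 5))
    v, pi = lambda_policy_iteration(P, r)
    print(pi, v.round(3))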


Average Interpolating Wavelets on Point Clouds and Graphs

arXiv.org Machine Learning

We introduce a new wavelet transform suitable for analyzing functions on point clouds and graphs. Our construction is based on a generalization of the average interpolating refinement scheme of Donoho. The most important ingredient of the original scheme that needs to be altered is the choice of the interpolant. Here, we define the interpolant as the minimizer of a smoothness functional, namely a generalization of the Laplacian energy, subject to the averaging constraints. In the continuous setting, we derive a formula for the optimal solution in terms of the poly-harmonic Green's function. The form of this solution is used to motivate our construction in the setting of graphs and point clouds. We highlight the empirical convergence of our refinement scheme and the potential applications of the resulting wavelet transform through experiments on a number of data sets.
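
In the graph setting, the interpolation step described above reduces to an equality-constrained quadratic program: minimize a (poly-)Laplacian energy subject to the averaging constraints. The sketch below solves the corresponding KKT system on a path graph with made-up cell averages; it illustrates the shape of the computation rather than the paper's exact refinement scheme.

    import numpy as np

    def average_interpolant(L, A, a, m=2):
        """Minimize f^T L^m f subject to averaging constraints A f = a.

        L: graph Laplacian (n x n); A: (k x n) matrix whose rows are normalized
        indicators of the cells being averaged; a: target averages (k,).
        Solved via the KKT system of the equality-constrained quadratic program.
        """
        n, k = L.shape[0], A.shape[0]
        Q = np.linalg.matrix_power(L, m)                 # poly-harmonic energy f^T L^m f
        kkt = np.block([[2 * Q, A.T],
                        [A, np.zeros((k, k))]])
        rhs = np.concatenate([np.zeros(n), a])
        sol = np.linalg.lstsq(kkt, rhs, rcond=None)[0]   # least squares for robustness
        return sol[:n]

    # Path graph on 8 nodes, two cells {0..3} and {4..7} with prescribed averages.
    n = 8
    L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = 1.0
    A = np.zeros((2, n))
    A[0, :4] = 0.25
    A[1, 4:] = 0.25
    f = average_interpolant(L, A, np.array([1.0, 3.0]))
    print(f.round(3), A @ f)    # the cell averages of f should be close to [1, 3]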


Model Selection Consistency for Cointegrating Regressions

arXiv.org Machine Learning

We study the asymptotic properties of the adaptive Lasso in cointegration regressions in the case where all covariates are weakly exogenous. We assume that the number of candidate I(1) variables is sub-linear with respect to the sample size (but possibly larger) and that the number of candidate I(0) variables is polynomial with respect to the sample size. We show that, under classical conditions used in cointegration analysis, this estimator asymptotically selects the correct subset of variables in the model, and that its asymptotic distribution is the same as that of the OLS estimator obtained as if the variables in the model were known beforehand (the oracle property). We also derive an algorithm based on the local quadratic approximation and present a numerical study showing the adequacy of the method in finite samples.
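
As a generic illustration of the estimation step (not of the cointegration asymptotics), the adaptive Lasso combined with the local quadratic approximation amounts to iteratively reweighted ridge regressions, as sketched below. The data are synthetic and stationary, so the I(1)/I(0) structure and the paper's specific penalty weights are not reproduced.

    import numpy as np

    def adaptive_lasso_lqa(X, y, lam=1.0, gamma=1.0, iters=50, tol=1e-8):
        """Adaptive Lasso fitted by the local quadratic approximation (sketch).

        The penalty lam * sum_j w_j |b_j|, with weights w_j = 1/|b_ols_j|^gamma,
        is approximated around the current iterate by a quadratic, so each step
        is a weighted ridge regression. Coefficients shrunk below `tol` are frozen at 0.
        """
        n, p = X.shape
        b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
        w = 1.0 / (np.abs(b_ols) ** gamma + 1e-12)
        b = b_ols.copy()
        for _ in range(iters):
            active = np.abs(b) > tol
            if not active.any():
                break
            Xa = X[:, active]
            D = np.diag(lam * w[active] / np.abs(b[active]))
            b_new = np.zeros(p)
            b_new[active] = np.linalg.solve(Xa.T @ Xa + D, Xa.T @ y)
            converged = np.max(np.abs(b_new - b)) < tol
            b = b_new
            if converged:
                break
        return b

    # Synthetic example: only the first two coefficients are truly nonzero.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 6))
    y = X @ np.array([2.0, -1.5, 0, 0, 0, 0]) + 0.1 * rng.standard_normal(200)
    print(adaptive_lasso_lqa(X, y, lam=5.0).round(3))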


Open Source Software: How Can Design Metrics Facilitate Architecture Recovery?

arXiv.org Artificial Intelligence

Modern software development methodologies include the reuse of open source code. Reuse can be facilitated by architectural knowledge of the software, which is not necessarily provided in the documentation of open source software. The effort required to comprehend a system's source code and discover its architecture can be considered a major obstacle to reuse. In a recent study we examined the correlations between design metrics and classes' architecture layers. In this paper, we apply our methodology to more open source projects to verify the applicability of our method. Keywords: system understanding; program comprehension; object-oriented; reuse; architecture layer; design metrics


Consistency and Random Constraint Satisfaction Models

arXiv.org Artificial Intelligence

In this paper, we study the possibility of designing non-trivial random CSP models by exploiting the intrinsic connection between structures and typical-case hardness. We show that constraint consistency, a notion that has been developed to improve the efficiency of CSP algorithms, is in fact the key to the design of random CSP models that have interesting phase transition behavior and guaranteed exponential resolution complexity, without putting much restriction on the constraint tightness parameter or the domain size of the problem. We propose a very flexible framework for constructing problem instances with interesting behavior and develop a variety of concrete methods to construct specific random CSP models that enforce different levels of constraint consistency. A series of experimental studies with interesting observations are carried out to illustrate the effectiveness of introducing structural elements in random instances, to verify the robustness of our proposal, and to investigate features of some specific models based on our framework that are highly related to the behavior of backtracking search algorithms.
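
The sketch below is not the paper's framework, but it illustrates the two ingredients it combines: a generator of random binary CSP instances with a constraint-tightness parameter, and an arc-consistency (AC-3) pass that can be used to inspect or filter the generated constraints. All parameter choices are arbitrary.

    import itertools
    import random

    def random_binary_csp(n_vars=10, domain_size=4, n_constraints=20, tightness=0.3, seed=0):
        """Random binary CSP: each constraint forbids a `tightness` fraction of value pairs."""
        rng = random.Random(seed)
        domains = {v: set(range(domain_size)) for v in range(n_vars)}
        scopes = rng.sample(list(itertools.combinations(range(n_vars), 2)), n_constraints)
        constraints = {}
        for (x, y) in scopes:
            all_pairs = list(itertools.product(range(domain_size), repeat=2))
            forbidden = set(rng.sample(all_pairs, int(tightness * len(all_pairs))))
            constraints[(x, y)] = set(all_pairs) - forbidden
        return domains, constraints

    def ac3(domains, constraints):
        """AC-3: remove unsupported values; returns False iff some domain becomes empty."""
        arcs = [(x, y) for (x, y) in constraints] + [(y, x) for (x, y) in constraints]
        queue = list(arcs)
        while queue:
            x, y = queue.pop()
            if (x, y) in constraints:
                allowed = constraints[(x, y)]
            else:
                allowed = {(b, a) for (a, b) in constraints[(y, x)]}
            supported = {a for a in domains[x] if any((a, b) in allowed for b in domains[y])}
            if supported != domains[x]:
                domains[x] = supported
                if not supported:
                    return False
                queue.extend((z, w) for (z, w) in arcs if w == x and z != y)
        return True

    domains, constraints = random_binary_csp(tightness=0.5)
    ok = ac3(domains, constraints)
    print("arc consistent after pruning:", ok, {v: len(d) for v, d in domains.items()})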


The Generalized A* Architecture

arXiv.org Artificial Intelligence

We consider the problem of computing a lightest derivation of a global structure using a set of weighted rules. A large variety of inference problems in AI can be formulated in this framework. We generalize A* search and heuristics derived from abstractions to a broad class of lightest derivation problems. We also describe a new algorithm that searches for lightest derivations using a hierarchy of abstractions. Our generalization of A* gives a new algorithm for searching AND/OR graphs in a bottom-up fashion. We discuss how the algorithms described here provide a general architecture for addressing the pipeline problem: the problem of passing information back and forth between the various stages of processing in a perceptual system. We consider examples in computer vision and natural language processing. We apply the hierarchical search algorithm to the problem of estimating the boundaries of convex objects in grayscale images and compare it to other search methods. A second set of experiments demonstrates the use of a new compositional model for finding salient curves in images.
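
As background for the lightest-derivation formulation, the sketch below implements the basic bottom-up, Dijkstra-style computation of lightest derivations from weighted rules (in the spirit of Knuth's generalization of Dijkstra's algorithm), without the A* heuristics or the hierarchy of abstractions developed in the paper. The rule format and the toy example are invented.

    import heapq

    def lightest_derivations(rules, axioms):
        """Bottom-up lightest-derivation search (sketch).

        rules: list of (antecedents, conclusion, weight); deriving the conclusion
        costs weight + sum of the antecedents' costs. axioms: {statement: cost}.
        Assumes nonnegative weights, so statements can be finalized in cost order.
        """
        cost = {}
        frontier = [(c, s) for s, c in axioms.items()]
        heapq.heapify(frontier)
        while frontier:
            c, s = heapq.heappop(frontier)
            if s in cost:               # already finalized with a lighter derivation
                continue
            cost[s] = c
            for antecedents, conclusion, w in rules:
                if conclusion not in cost and all(a in cost for a in antecedents):
                    heapq.heappush(frontier, (w + sum(cost[a] for a in antecedents), conclusion))
        return cost

    # Toy example (hypothetical statements): derive "goal" from two parts.
    rules = [
        (["a"], "left", 1.0),
        (["b"], "left", 3.0),
        (["b"], "right", 1.0),
        (["left", "right"], "goal", 0.5),
    ]
    print(lightest_derivations(rules, {"a": 0.0, "b": 0.0}))
    # {'a': 0.0, 'b': 0.0, 'left': 1.0, 'right': 1.0, 'goal': 2.5}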