Fixpoints and Iterated Updates in Abstract Argumentation

AAAI Conferences

Fixpoints play a key role in the mathematical setup of abstract argumentation theory but, we argue, have been relatively underexamined in the literature. The paper studies the logical structure underlying the computation, via approximation sequences, of the sort of fixpoints relevant in argumentation. Concretely, it presents a number of novel results on the fixed-point theory underpinning Dung's main semantics and, inspired by recent literature on the logical analysis of equilibrium computation in games, provides a characterization of those semantics in terms of iterated model updates.
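
The least fixpoints at stake here are the ones behind Dung's grounded semantics: the grounded extension is the least fixpoint of the characteristic function, reached by the approximation sequence that starts from the empty set. A minimal Python sketch of that iteration, on a toy framework invented for illustration (not an example from the paper):

    def characteristic(args, attacks, s):
        """F(S): the arguments all of whose attackers are attacked by S."""
        return {a for a in args
                if all(any((c, b) in attacks for c in s)
                       for b in args if (b, a) in attacks)}

    def grounded_extension(args, attacks):
        """Iterate F from the empty set until the least fixpoint is reached."""
        s = set()
        while True:
            nxt = characteristic(args, attacks, s)
            if nxt == s:
                return s
            s = nxt

    # Example: a attacks b, b attacks c; the grounded extension is {a, c}.
    args, attacks = {"a", "b", "c"}, {("a", "b"), ("b", "c")}
    print(grounded_extension(args, attacks))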


Assertion Absorption in Object Queries over Knowledge Bases

AAAI Conferences

We develop a novel absorption technique for large collections of factual assertions about individual objects. Such assertions are commonly accompanied by implicit background knowledge and together form a knowledge base. Both the assertions and the background knowledge are expressed in a suitable Description Logic, and queries over such knowledge bases can be expressed as assertion retrieval queries. The proposed absorption technique significantly improves the performance of such queries, in particular when a large number of object features are known for the objects represented in the knowledge base. In addition to the absorption technique, we present the results of a preliminary experimental evaluation that validates the efficacy of the proposed optimization.


Specifying and Reasoning with Underspecified Knowledge Bases Using Answer Set Programming

AAAI Conferences

A large and complex knowledge base that models some aspect of the real world can rarely be fully specified. Two examples of such underspecification are that (i) some of the cardinality constraints are omitted, and (ii) some properties of all individual instances of a class are specialized across a class hierarchy, but specific references to which particular values are specialized are omitted. Such knowledge bases are of great practical interest, as they are the basis of an empirically tested knowledge acquisition system that has been used to construct a knowledge base from a significant portion of a biology textbook. In this paper, we formalize underspecified knowledge bases using answer set programming and give a set of rules, called UMAP, that supports inheritance reasoning in such knowledge bases.
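
UMAP itself is a set of ASP rules; purely to illustrate the intended inheritance behaviour (a value asserted on a class is inherited by its subclasses unless a more specific class specializes it), here is a small Python sketch with an invented class hierarchy and property values:

    class KB:
        """Class hierarchy where a subclass may specialize an inherited value."""
        def __init__(self):
            self.parent = {}     # class -> superclass
            self.values = {}     # (class, property) -> asserted value

        def add_subclass(self, cls, sup):
            self.parent[cls] = sup

        def assert_value(self, cls, prop, value):
            self.values[(cls, prop)] = value

        def lookup(self, cls, prop):
            """Walk up the hierarchy; the most specific assertion wins."""
            while cls is not None:
                if (cls, prop) in self.values:
                    return self.values[(cls, prop)]
                cls = self.parent.get(cls)
            return None          # underspecified: no value derivable

    kb = KB()
    kb.add_subclass("B", "A")
    kb.assert_value("A", "p", "default")
    kb.assert_value("B", "p", "specialized")         # overrides the inherited value
    print(kb.lookup("B", "p"), kb.lookup("A", "p"))  # specialized default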


Horn Belief Contraction: Remainders, Envelopes and Complexity

AAAI Conferences

Belief change studies how to update knowledge bases used for reasoning. Traditionally, belief revision has been based on full propositional logic. However, reasoning with full propositional knowledge bases is computationally hard, whereas reasoning with Horn knowledge bases is fast. In the past several years, there has been considerable work in belief revision theory on developing a theory of belief contraction for knowledge represented in Horn form. Our main focus here is the computational complexity of belief contraction, in particular of the various methods and approaches suggested in the literature. This is a natural and important question, especially in connection with one of the primary motivations for considering Horn representations: efficiency. The problems considered lead to questions about Horn envelopes (or Horn LUBs), introduced earlier in the context of knowledge compilation. This work gives a syntactic characterization of the remainders of a Horn belief set with respect to a consequence to be contracted: they are the Horn envelopes of the belief set and an elementary conjunction corresponding to a truth assignment satisfying a certain explicitly given formula. This yields an efficient algorithm to generate all remainders, each represented by a truth assignment. On the negative side, we give examples of Horn belief sets and consequences for which Horn formulas representing the result of contraction, based either on remainders or on weak remainders, must have exponential size for almost all possible choice functions (i.e., different possible choices of partial meet contraction). Therefore, using the Horn framework for belief contraction does not by itself give us computational efficiency; further work is required to explore the possibilities for efficient belief change methods.
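
A useful semantic handle on Horn envelopes from the knowledge compilation literature mentioned above: a theory is Horn exactly when its set of models is closed under componentwise conjunction, and the models of the Horn envelope are the closure of the original models under that operation. A small Python sketch of this model-level view (the example formula is ours, not the paper's):

    from itertools import combinations

    def intersection_closure(models):
        """Close a set of 0/1 tuples under componentwise AND."""
        closed = set(models)
        while True:
            new = {tuple(a & b for a, b in zip(m1, m2))
                   for m1, m2 in combinations(closed, 2)} - closed
            if not new:
                return closed
            closed |= new

    # Models of (x or y) over (x, y): not intersection-closed, since
    # (1,0) AND (0,1) = (0,0) is missing. The envelope adds it, so the
    # strongest Horn consequence of (x or y) is the trivial theory.
    print(sorted(intersection_closure({(1, 0), (0, 1), (1, 1)})))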


Information Forests

arXiv.org Machine Learning

We describe Information Forests, an approach to classification that generalizes Random Forests by changing the splitting criterion at non-leaf nodes from a discriminative one -- based on the entropy of the label distribution -- to a generative one -- based on maximizing the information divergence between the class-conditional distributions in the resulting partitions. The basic idea is to defer classification until a measure of "classification confidence" is sufficiently high, and instead to break down the data so as to maximize this measure. In an alternative interpretation, Information Forests attempt to partition the data into subsets that are "as informative as possible" for the task at hand, which is to classify the data. Classification confidence, or the informative content of the subsets, is quantified by the information divergence. Our approach relates to active learning, semi-supervised learning, and mixed generative/discriminative learning.
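
To make the contrast concrete, the following Python sketch scores a one-dimensional threshold split under both criteria: the usual weighted label entropy, and a generative score that rewards splits whose children keep the class-conditional feature histograms far apart in symmetrized KL divergence. The histogram estimator, the weighting, and the data are illustrative choices of ours, not the paper's construction:

    import numpy as np

    def label_entropy(y):
        """Entropy of the empirical label distribution."""
        p = np.bincount(y) / len(y)
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    def sym_kl(p, q, eps=1e-9):
        """Symmetrized KL divergence between two (unnormalized) histograms."""
        p, q = p + eps, q + eps
        p, q = p / p.sum(), q / q.sum()
        return ((p - q) * (np.log(p) - np.log(q))).sum()

    def class_cond_divergence(x, y, bins):
        """Divergence between the two class-conditional feature histograms."""
        h0, _ = np.histogram(x[y == 0], bins=bins)
        h1, _ = np.histogram(x[y == 1], bins=bins)
        return sym_kl(h0.astype(float), h1.astype(float))

    def score_split(x, y, t, bins):
        """Discriminative and generative scores of the split x <= t
        (both children assumed nonempty); higher is better for both."""
        left, right = x <= t, x > t
        wl, wr = left.mean(), right.mean()
        disc = -(wl * label_entropy(y[left]) + wr * label_entropy(y[right]))
        gen = (wl * class_cond_divergence(x[left], y[left], bins)
               + wr * class_cond_divergence(x[right], y[right], bins))
        return disc, gen

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-1, 1, 200), rng.normal(1, 1, 200)])
    y = np.array([0] * 200 + [1] * 200)
    print(score_split(x, y, 0.0, np.linspace(-4, 4, 17)))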


Algebraic Geometric Comparison of Probability Distributions

arXiv.org Machine Learning

We propose a novel algebraic framework for treating probability distributions represented by their cumulants, such as the mean and covariance matrix. As an example, we consider the unsupervised learning problem of finding the subspace on which several probability distributions agree. Instead of minimizing an objective function involving the estimated cumulants, we show that by treating the cumulants as elements of the polynomial ring we can solve the problem directly, at lower computational cost and with higher accuracy. Moreover, the algebraic viewpoint on probability distributions allows us to invoke the theory of algebraic geometry, which we demonstrate with a compact proof of an identifiability criterion.
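
The paper's contribution is the algebraic solution; as a point of reference only, here is a naive spectral baseline of our own construction (not the paper's method) for the first two cumulants. A direction v on which all distributions share mean and variance satisfies (mu_i - mu_j)^T v = 0 and (Sigma_i - Sigma_j) v = 0 (the latter being a sufficient condition for the projected variances to agree), so one can take the smallest eigenvector of a sum of squared difference terms:

    import numpy as np

    def common_direction(means, covs):
        """Direction minimizing pairwise disagreement of mean and covariance."""
        d = means[0].shape[0]
        M = np.zeros((d, d))
        for i in range(len(means)):
            for j in range(i + 1, len(means)):
                dm = means[i] - means[j]           # mean difference
                dS = covs[i] - covs[j]             # covariance difference
                M += np.outer(dm, dm) + dS @ dS    # penalizes (dm.v)^2 + |dS v|^2
        w, V = np.linalg.eigh(M)                   # eigenvalues in ascending order
        return V[:, 0]                             # smallest-disagreement direction

    # Two 3-D Gaussians that agree exactly on the first coordinate.
    mu1, mu2 = np.array([0., 1., 2.]), np.array([0., -1., 0.5])
    S1, S2 = np.diag([1., 2., 3.]), np.diag([1., 0.5, 1.])
    print(common_direction([mu1, mu2], [S1, S2]))  # ~ (+-1, 0, 0)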


Modification of the Elite Ant System in Order to Avoid Local Optimum Points in the Traveling Salesman Problem

arXiv.org Artificial Intelligence

This article presents a new algorithm that is a modified version of the elite ant system (EAS). The new version uses an effective criterion for escaping from local optimum points. In contrast to the classical EAS algorithm, the proposed algorithm uses only a global pheromone update, which increases the pheromone on the edges of the best (i.e., shortest) route while decreasing the pheromone on the edges of the worst (i.e., longest) route. To assess the efficiency of the new algorithm, some standard traveling salesman problems (TSPs) were studied and the results were compared with the classical EAS and other well-known meta-heuristic algorithms. The results indicate that the proposed algorithm improves efficiency in all instances and is competitive with the other algorithms.
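
A minimal Python sketch of the global-only update described above: evaporate everywhere, reinforce the edges of the best tour, and penalize the edges of the worst one. The parameter names, evaporation rate, and penalty form are our illustrative choices, not constants from the paper:

    import numpy as np

    def global_update(tau, best_tour, worst_tour, best_len, worst_len,
                      rho=0.1, q=1.0):
        """One global pheromone update on a symmetric pheromone matrix tau."""
        tau *= (1.0 - rho)                         # evaporation on every edge
        for tour, sign, length in ((best_tour, +1.0, best_len),
                                   (worst_tour, -1.0, worst_len)):
            for a, b in zip(tour, tour[1:] + tour[:1]):  # edges of the closed tour
                tau[a, b] += sign * q / length     # reinforce best, penalize worst
                tau[b, a] = tau[a, b]
        np.clip(tau, 1e-6, None, out=tau)          # keep pheromone strictly positive
        return tau

    tau = np.ones((5, 5))
    tau = global_update(tau, [0, 1, 2, 3, 4], [0, 2, 4, 1, 3], 10.0, 18.0)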


Optimization in SMT with LA(Q) Cost Functions

arXiv.org Artificial Intelligence

In the contexts of automated reasoning (AR) and formal verification, important decision problems are effectively encoded into Satisfiability Modulo Theories (SMT). In the last decade, efficient SMT solvers have been developed for several theories of practical interest (e.g., linear arithmetic, arrays, bit-vectors). Surprisingly, very little work has been done to extend SMT to deal with optimization problems; in particular, we are not aware of any work on SMT solvers able to produce solutions that minimize cost functions over arithmetical variables. This is unfortunate, since some problems of interest require this functionality. In this paper we start filling this gap. We present and discuss two general procedures for leveraging SMT to handle the minimization of LA(Q) cost functions, combining SMT with standard minimization techniques. We have implemented the proposed approach within the MathSAT SMT solver. Due to the lack of competitors in the AR and SMT domains, we experimentally evaluated our implementation against state-of-the-art tools for linear generalized disjunctive programming (LGDP), which is closest in spirit to our domain, on sets of problems previously proposed as benchmarks for those tools. The results show that our tool is very competitive with, and often outperforms, these tools on these problems, clearly demonstrating the potential of the approach.
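
One of the standard minimization schemes such procedures build on is linear search: solve, read off the cost of the model found, assert that the cost must be strictly smaller, and repeat until unsat. A minimal sketch using the z3-solver Python bindings (z3 rather than MathSAT, purely for illustration; termination of this naive loop is not guaranteed for arbitrary LA(Q) problems, which is part of what the paper's procedures address):

    from z3 import Real, Solver, sat

    x, y = Real("x"), Real("y")
    cost = x + y
    s = Solver()
    s.add(x >= 1, y >= 1, x + 2 * y >= 5)  # a small LA(Q) feasible region

    best = None
    while s.check() == sat:
        best = s.model().eval(cost)        # cost of the model just found
        s.add(cost < best)                 # tighten the bound and retry
    print("minimum cost:", best)           # 3, attained at x = 1, y = 2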


Minimax Rates of Estimation for Sparse PCA in High Dimensions

arXiv.org Machine Learning

We study sparse principal components analysis in the high-dimensional setting, where $p$ (the number of variables) can be much larger than $n$ (the number of observations). We prove optimal, non-asymptotic lower and upper bounds on the minimax estimation error for the leading eigenvector when it belongs to an $\ell_q$ ball for $q \in [0,1]$. Our bounds are sharp in $p$ and $n$ for all $q \in [0, 1]$ over a wide class of distributions. The upper bound is obtained by analyzing the performance of $\ell_q$-constrained PCA. In particular, our results provide convergence rates for $\ell_1$-constrained PCA.
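
Computing the $\ell_q$-constrained PCA estimator exactly is intractable in general; as an illustration of the $q = 0$ case only, here is the well-known truncated power method heuristic, which hard-thresholds to the $k$ largest coordinates at each iteration. The initialization and the spiked-covariance example are our choices, not the paper's:

    import numpy as np

    def truncated_power(S, k, iters=100):
        """Sparse leading eigenvector of S via hard truncation to k coordinates."""
        v = np.zeros(S.shape[0])
        v[np.argsort(np.diag(S))[-k:]] = 1.0     # init on largest diagonal entries
        v /= np.linalg.norm(v)
        for _ in range(iters):
            w = S @ v
            w[np.argsort(np.abs(w))[:-k]] = 0.0  # zero all but the k largest |w_i|
            v = w / np.linalg.norm(w)
        return v

    # Spiked covariance with a 3-sparse leading eigenvector in p = 50 dimensions.
    p = 50
    v_star = np.zeros(p)
    v_star[:3] = 1 / np.sqrt(3)
    S = np.eye(p) + 4.0 * np.outer(v_star, v_star)
    print(abs(truncated_power(S, k=3) @ v_star))  # close to 1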


Beta-Negative Binomial Process and Poisson Factor Analysis

arXiv.org Machine Learning

A beta-negative binomial (BNB) process is proposed, leading to a beta-gamma-Poisson process, which may be viewed as a "multi-scoop" generalization of the beta-Bernoulli process. The BNB process is augmented into a beta-gamma-gamma-Poisson hierarchical structure, and applied as a nonparametric Bayesian prior for an infinite Poisson factor analysis model. A finite approximation for the beta process Lévy random measure is constructed for convenient implementation. Efficient MCMC computations are performed with data augmentation and marginalization techniques. Encouraging results are shown on document count matrix factorization.
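
A minimal generative sketch of the finite, truncated Poisson factor analysis model at the core of this construction: counts are Poisson with rate Theta Phi, with gamma-distributed factor scores and Dirichlet-distributed factor loadings. Dimensions and hyperparameters below are illustrative; the paper's BNB process makes the number of factors unbounded and ties these priors together:

    import numpy as np

    rng = np.random.default_rng(0)
    D, V, K = 100, 500, 20       # documents, vocabulary size, truncated factor count

    Phi = rng.dirichlet(np.full(V, 0.05), size=K)         # K x V factor loadings
    Theta = rng.gamma(shape=0.5, scale=1.0, size=(D, K))  # D x K factor scores
    X = rng.poisson(Theta @ Phi)                          # D x V observed counts
    print(X.shape, X.sum())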