Efficient Tabling Mechanisms for Transaction Logic Programs

arXiv.org Artificial Intelligence

In this paper we present efficient evaluation algorithms for Horn Transaction Logic (a generalization of regular Horn logic programs with state updates). We present two complementary methods for optimizing the implementation of Transaction Logic. The first method is based on tabling: we modify the proof theory to table calls and answers on states, which is practically equivalent to dynamic programming. The call-answer table is indexed on the call and a signature of the state in which the call was made. The answer columns contain the answer unification and a signature of the state after the call was executed. State signatures are computed efficiently using a technique based on tries and counting. The second method is based on incremental evaluation and applies when the data oracle contains derived relations. The deletions and insertions (executed in the transaction oracle) change the state of the database. Under the heuristic of inertia (only a part of the state changes in response to elementary updates), it is usually cheaper to compute only the changes in the state than to recompute the entire state from scratch. The two methods are complementary in that the first optimizes the evaluation when a call is repeated in the same state, while the second optimizes the evaluation of a new state when a call-state pair is not found by the tabling mechanism (i.e. the first method). The proof theory of Transaction Logic with tabling and incremental evaluation applied remains sound and complete with respect to its model theory.
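
To make the tabling idea concrete, here is a minimal Python sketch (not the paper's implementation) of a call-answer table keyed on a call pattern and a state signature; the helper names are invented for illustration, and the signature is computed here by hashing sorted facts, whereas the paper computes signatures with a more efficient trie-and-counting scheme.

    # Minimal sketch (not the paper's implementation) of a call-answer table
    # for tabling Transaction Logic calls. The table is keyed on the call
    # pattern and a signature of the database state; the signature below is a
    # hash of sorted facts, while the paper uses tries and counting.

    from hashlib import sha256

    def state_signature(facts):
        """Signature of a database state given as an iterable of ground facts."""
        digest = sha256()
        for fact in sorted(facts):
            digest.update(fact.encode())
            digest.update(b"\x00")              # separator between facts
        return digest.hexdigest()

    class CallAnswerTable:
        def __init__(self):
            # (call, state signature) -> list of (answer substitution, new state signature)
            self.entries = {}

        def lookup(self, call, state_sig):
            """Return recorded answers for this call in this state, or None."""
            return self.entries.get((call, state_sig))

        def record(self, call, state_sig, answer, new_state_sig):
            """Record an answer and the signature of the state after execution."""
            self.entries.setdefault((call, state_sig), []).append((answer, new_state_sig))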


On Universal Prediction and Bayesian Confirmation

arXiv.org Machine Learning

The Bayesian framework is a well-studied and successful framework for inductive reasoning, which includes hypothesis testing and confirmation, parameter estimation, sequence prediction, classification, and regression. But standard statistical guidelines for choosing the model class and prior are not always available, or fail, in particular in complex situations. Solomonoff completed the Bayesian framework by providing a rigorous, unique, formal, and universal choice for the model class and the prior. We discuss in breadth how and in which sense universal (non-i.i.d.) sequence prediction solves various (philosophical) problems of traditional Bayesian sequence prediction. We show that Solomonoff's model possesses many desirable properties: it satisfies strong total and weak instantaneous bounds; in contrast to most classical continuous prior densities it has no zero p(oste)rior problem, i.e. it can confirm universal hypotheses; it is reparametrization and regrouping invariant; and it avoids the old-evidence and updating problems. It even performs well (actually better) in non-computable environments.
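
For reference, Solomonoff's universal choice of model class and prior can be written in the standard notation of the algorithmic-information literature (a background statement, not a formula quoted from this paper):

    M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)},

where U is a universal monotone Turing machine, the sum ranges over minimal programs p whose output starts with x, and \ell(p) is the length of p in bits. Within a multiplicative constant, M coincides with the Bayes mixture \xi(x) = \sum_{\nu} 2^{-K(\nu)}\,\nu(x) over all enumerable semimeasures \nu, weighted by their Kolmogorov complexity K(\nu).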


Multi-Sensor Fusion Method using Dynamic Bayesian Network for Precise Vehicle Localization and Road Matching

arXiv.org Artificial Intelligence

This paper presents a multi-sensor fusion strategy for a novel road-matching method designed to support real-time navigational features within advanced driving-assistance systems. Managing multiple hypotheses is a useful strategy for the road-matching problem. The multi-sensor fusion and multi-modal estimation are realized using a Dynamic Bayesian Network. Experimental results, using data from Antilock Braking System (ABS) sensors, a differential Global Positioning System (GPS) receiver and an accurate digital roadmap, illustrate the performance of this approach, especially in ambiguous situations.
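
As a rough illustration of the multi-hypothesis idea (not the paper's Dynamic Bayesian Network), the sketch below maintains a discrete belief over candidate road segments with a simple Bayes filter; the segment names, the transition model and the Gaussian GPS likelihood are assumptions made only for this example.

    # Rough illustration of multi-hypothesis road matching with a discrete
    # Bayes filter. Segment names, the transition model and the Gaussian GPS
    # likelihood are assumptions for illustration.

    import math

    def gps_likelihood(distance_m, sigma_m=5.0):
        """How well a road segment explains a GPS fix at the given distance."""
        return math.exp(-0.5 * (distance_m / sigma_m) ** 2)

    def update_hypotheses(beliefs, transitions, distance_to_fix):
        """beliefs: {segment: probability}; transitions: {segment: {next: prob}};
        distance_to_fix: {segment: distance in metres from the GPS fix}."""
        # Prediction: propagate belief through the road-network transition model.
        predicted = {}
        for seg, p in beliefs.items():
            for nxt, t in transitions.get(seg, {seg: 1.0}).items():
                predicted[nxt] = predicted.get(nxt, 0.0) + p * t
        # Correction: weight each hypothesis by how well it explains the GPS fix.
        weighted = {seg: p * gps_likelihood(distance_to_fix.get(seg, float("inf")))
                    for seg, p in predicted.items()}
        total = sum(weighted.values()) or 1.0    # keep all hypotheses if none fit
        return {seg: p / total for seg, p in weighted.items()}

    # Two ambiguous parallel segments: the closer one gradually wins.
    belief = {"road_A": 0.5, "road_B": 0.5}
    belief = update_hypotheses(belief,
                               {"road_A": {"road_A": 1.0}, "road_B": {"road_B": 1.0}},
                               {"road_A": 3.0, "road_B": 8.0})
    print(belief)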


Simple Algorithmic Principles of Discovery, Subjective Beauty, Selective Attention, Curiosity & Creativity

arXiv.org Artificial Intelligence

I postulate that human or other intelligent agents function or should function as follows. They store all sensory observations as they come - the data is holy. At any time, given some agent's current coding capabilities, part of the data is compressible by a short and hopefully fast program / description / explanation / world model. In the agent's subjective eyes, such data is more regular and more "beautiful" than other data. It is well-known that knowledge of regularity and repeatability may improve the agent's ability to plan actions leading to external rewards. In the absence of such rewards, however, known beauty is boring. Then "interestingness" becomes the first derivative of subjective beauty: as the learning agent improves its compression algorithm, formerly apparently random data parts become subjectively more regular and beautiful. Such progress in compressibility is measured and maximized by the curiosity drive: create action sequences that extend the observation history and yield previously unknown / unpredictable but quickly learnable algorithmic regularity. We discuss how all of the above can be naturally implemented on computers, through an extension of passive unsupervised learning to the case of active data selection: we reward a general reinforcement learner (with access to the adaptive compressor) for actions that improve the subjective compressibility of the growing data. An unusually large breakthrough in compressibility deserves the name "discovery". The "creativity" of artists, dancers, musicians, and pure mathematicians can be viewed as a by-product of this principle. Several qualitative examples support this hypothesis.
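
A crude illustration of compression-based intrinsic reward is sketched below, under strong simplifying assumptions: the paper's agent improves its own adaptive compressor over time, whereas this sketch uses a fixed compressor (zlib) and merely rewards how many bytes the regularity shared between the stored history and a new observation saves.

    # Crude illustration of compression-based intrinsic reward. The paper's
    # agent improves its own adaptive compressor; this sketch instead uses a
    # fixed compressor (zlib) and rewards the bytes saved by the regularity
    # shared between the stored history and a new observation.

    import zlib

    def compressed_len(data: bytes) -> int:
        return len(zlib.compress(data, 9))

    def intrinsic_reward(history: bytes, observation: bytes) -> int:
        """Bytes saved by compressing the observation together with the history,
        compared to compressing the two separately (a proxy for learnable regularity)."""
        separate = compressed_len(history) + compressed_len(observation)
        joint = compressed_len(history + observation)
        return separate - joint

    history = b"abcabcabc" * 50
    print(intrinsic_reward(history, b"abcabcabc" * 10))   # high: regular, compressible in context
    print(intrinsic_reward(history, bytes(range(90))))    # low: no shared regularity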


Qualitative Belief Conditioning Rules (QBCR)

arXiv.org Artificial Intelligence

In this paper we extend the new family of (quantitative) Belief Conditioning Rules (BCR) recently developed in the Dezert-Smarandache Theory (DSmT) to their qualitative counterpart for belief revision. Since the revision of quantitative as well as qualitative belief assignments given the occurrence of a new event (the conditioning constraint) can be done in many possible ways, we present here only what we consider the most appealing Qualitative Belief Conditioning Rules (QBCR), which allow one to revise beliefs directly with words and linguistic labels and thus avoid the introduction of ad-hoc translations of qualitative beliefs into quantitative ones for solving the problem.
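
As a hedged illustration of computing with linguistic labels directly, the sketch below defines an ordered label scale with a saturating addition; this is a generic convention for qualitative arithmetic, not necessarily the exact operators defined by the QBCR of the paper, and the label names are assumptions.

    # Illustrative only: an ordered scale of linguistic labels L0 < L1 < ... < Ln
    # with a saturating addition, a generic convention for qualitative
    # arithmetic. Consult the paper for the exact QBCR operators.

    LABELS = ["impossible", "very_unlikely", "unlikely", "neutral",
              "likely", "very_likely", "certain"]        # L0 .. L6 (assumed scale)

    def q_add(label_a, label_b):
        """Saturating qualitative addition: Li + Lj = L_min(i+j, n)."""
        i, j = LABELS.index(label_a), LABELS.index(label_b)
        return LABELS[min(i + j, len(LABELS) - 1)]

    print(q_add("very_unlikely", "unlikely"))   # -> "neutral" (L1 + L2 = L3)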


Compositional Semantics Grounded in Commonsense Metaphysics

arXiv.org Artificial Intelligence

We argue for a compositional semantics grounded in a strongly typed ontology that reflects our commonsense view of the world and the way we talk about it in ordinary language. Assuming the existence of such a structure, we show that the semantics of various natural language phenomena may become nearly trivial.


2006: Celebrating 75 years of AI - History and Outlook: the Next 25 Years

arXiv.org Artificial Intelligence

When Kurt Goedel laid the foundations of theoretical computer science in 1931, he also introduced essential concepts of the theory of Artificial Intelligence (AI). Although much of subsequent AI research has focused on heuristics, which still play a major role in many practical AI applications, in the new millennium AI theory has finally become a full-fledged formal science, with important optimality results for embodied agents living in unknown environments, obtained through a combination of theory a la Goedel and probability theory. Here we look back at important milestones of AI history, mention essential recent results, and speculate about what we may expect from the next 25 years, emphasizing the significance of the ongoing dramatic hardware speedups, and discussing Goedel-inspired, self-referential, self-improving universal problem solvers.


Raising a Hardness Result

arXiv.org Artificial Intelligence

This article presents a technique for proving problems hard for classes of the polynomial hierarchy or for PSPACE. The rationale of this technique is that some problem restrictions are able to simulate existential or universal quantifiers. If this is the case, reductions from Quantified Boolean Formulae (QBFs) to these restrictions can be transformed into reductions from QBFs having one more quantifier at the front. This means that a proof of hardness of a problem at level n of the polynomial hierarchy can be split into n separate proofs, which may be simpler than a proof directly showing a reduction from a class of QBFs to the considered problem.
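
For background, the technique builds on the standard correspondence between quantifier alternation in QBFs and levels of the polynomial hierarchy (a textbook fact, not a result of this paper):

    \Sigma_k^p:\ \text{validity of } \exists X_1\,\forall X_2\,\cdots Q_k X_k\;\varphi, \qquad \Pi_k^p:\ \text{validity of } \forall X_1\,\exists X_2\,\cdots \overline{Q}_k X_k\;\varphi,

with k alternating blocks of quantifiers, while validity of QBFs with unboundedly many alternations is PSPACE-complete; adding one quantifier at the front thus moves the canonical complete problem up one level.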


Bayesian Approach to Neuro-Rough Models

arXiv.org Artificial Intelligence

This paper proposes a neuro-rough model based on a multi-layered perceptron and rough set theory. The neuro-rough model is then tested on modelling the risk of HIV from demographic data. The model is formulated within a Bayesian framework and trained using a Monte Carlo method with the Metropolis criterion. When the model was tested to estimate the risk of HIV infection given the demographic data, it was found to give an accuracy of 62%. The proposed model is able to combine the accuracy of the Bayesian MLP model with the transparency of the Bayesian rough set model.
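
To make the training procedure concrete, here is a minimal Metropolis sampler of the kind the abstract refers to; the Gaussian random-walk proposal, the log_posterior interface and the step size are assumptions for illustration rather than the paper's exact setup.

    # Minimal Metropolis sampler for Bayesian training of model parameters.
    # The Gaussian random-walk proposal, the log_posterior interface and the
    # step size are assumptions for illustration.

    import math
    import random

    def metropolis(log_posterior, initial, n_samples=10000, step=0.1):
        """Draw parameter samples approximately proportional to exp(log_posterior)."""
        current = list(initial)
        current_lp = log_posterior(current)
        samples = []
        for _ in range(n_samples):
            proposal = [w + random.gauss(0.0, step) for w in current]
            proposal_lp = log_posterior(proposal)
            delta = proposal_lp - current_lp
            # Metropolis criterion: accept uphill moves outright, accept
            # downhill moves with probability exp(delta).
            if delta >= 0 or random.random() < math.exp(delta):
                current, current_lp = proposal, proposal_lp
            samples.append(list(current))
        return samples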


Remarks on Inheritance Systems

arXiv.org Artificial Intelligence

We attempt a conceptual analysis of inheritance diagrams, first in abstract terms, and then compare it to the "normality" and "small/big sets" of preferential and related reasoning. The main ideas concern nodes as truth values and information sources, truth comparison by paths, accessibility or relevance of information by paths, relative normality, and prototypical reasoning.