arXiv.org Artificial Intelligence
Qualitative Belief Conditioning Rules (QBCR)
Smarandache, Florentin, Dezert, Jean
In this paper we extend the new family of (quantitative) Belief Conditioning Rules (BCR) recently developed in the Dezert-Smarandache Theory (DSmT) to their qualitative counterpart for belief revision. Since the revision of quantitative as well as qualitative belief assignments given the occurrence of a new event (the conditioning constraint) can be done in many possible ways, we present here only what we consider the most appealing Qualitative Belief Conditioning Rules (QBCR), which allow one to revise beliefs directly with words and linguistic labels and thus avoid ad-hoc translations of qualitative beliefs into quantitative ones for solving the problem.
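As an illustration of how arithmetic on linguistic labels can work, the following minimal Python sketch assumes the saturated label addition L_i + L_j = L_{min(i+j, n+1)} often used with qualitative DSmT operators; the actual QBCR formulas are those defined in the paper.

    # Minimal sketch of qualitative label arithmetic (assumption: saturated
    # addition L_i + L_j = L_{min(i+j, n+1)}; the actual QBCR formulas are
    # the ones defined in the paper).
    class Label:
        def __init__(self, index, n):
            self.index = index          # 0 .. n+1, with L_0 minimal and L_{n+1} maximal
            self.n = n
        def __add__(self, other):
            return Label(min(self.index + other.index, self.n + 1), self.n)
        def __repr__(self):
            return f"L{self.index}"

    n = 5
    low, high = Label(1, n), Label(4, n)
    print(low + high)                   # -> L5 (further additions saturate at L6 = L_{n+1})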
Compositional Semantics Grounded in Commonsense Metaphysics
We argue for a compositional semantics grounded in a strongly typed ontology that reflects our commonsense view of the world and the way we talk about it in ordinary language. Assuming the existence of such a structure, we show that the semantics of various natural language phenomena may become nearly trivial.
2006: Celebrating 75 years of AI - History and Outlook: the Next 25 Years
When Kurt Goedel laid the foundations of theoretical computer science in 1931, he also introduced essential concepts of the theory of Artificial Intelligence (AI). Although much of subsequent AI research has focused on heuristics, which still play a major role in many practical AI applications, in the new millennium AI theory has finally become a full-fledged formal science, with important optimality results for embodied agents living in unknown environments, obtained through a combination of theory a la Goedel and probability theory. Here we look back at important milestones of AI history, mention essential recent results, and speculate about what we may expect from the next 25 years, emphasizing the significance of the ongoing dramatic hardware speedups, and discussing Goedel-inspired, self-referential, self-improving universal problem solvers.
Raising a Hardness Result
This article presents a technique for proving problems hard for classes of the polynomial hierarchy or for PSPACE. The rationale of this technique is that some problem restrictions are able to simulate existential or universal quantifiers. If this is the case, reductions from Quantified Boolean Formulae (QBF) to these restrictions can be transformed into reductions from QBFs having one more quantifier at the front. This means that a proof of hardness of a problem at level n of the polynomial hierarchy can be split into n separate proofs, each of which may be simpler than a direct reduction from a class of QBFs to the considered problem.
Bayesian Approach to Neuro-Rough Models
Marwala, Tshilidzi, Crossingham, Bodie
This paper proposes a neuro-rough model based on a multi-layered perceptron and rough sets. The neuro-rough model is then tested on modelling the risk of HIV from demographic data. The model is formulated in a Bayesian framework and trained using a Monte Carlo method and the Metropolis criterion. When tested on estimating the risk of HIV infection from the demographic data, the model achieved an accuracy of 62%. The proposed model is able to combine the accuracy of the Bayesian MLP model and the transparency of the Bayesian rough set model.
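Since the abstract only names the training procedure, here is a minimal, generic random-walk Metropolis sampler over model parameters, assuming a user-supplied log_posterior function; it is a sketch of the sampling criterion mentioned above, not the paper's neuro-rough likelihood.

    # Generic random-walk Metropolis sampler (illustrative; log_posterior is a
    # hypothetical stand-in for the paper's Bayesian neuro-rough posterior).
    import numpy as np

    def metropolis(log_posterior, w0, steps=10000, step_size=0.05):
        w = np.asarray(w0, dtype=float)
        lp = log_posterior(w)
        samples = []
        for _ in range(steps):
            w_new = w + step_size * np.random.randn(*w.shape)   # propose a small move
            lp_new = log_posterior(w_new)
            if np.log(np.random.rand()) < lp_new - lp:          # Metropolis criterion
                w, lp = w_new, lp_new
            samples.append(w.copy())
        return np.array(samples)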
Remarks on Inheritance Systems
We attempt a conceptual analysis of inheritance diagrams, first in abstract terms, and then compare it to "normality" and the "small/big sets" of preferential and related reasoning. The main ideas concern nodes as truth values and information sources, truth comparison by paths, accessibility or relevance of information by paths, relative normality, and prototypical reasoning.
A structure from motion inequality
Knill, Oliver, Ramirez-Herran, Jose
We state an elementary inequality for the structure from motion problem for m cameras and n points. This structure from motion inequality relates the space dimension, the camera parameter dimension, the number of cameras, the number of points, and global symmetry properties, and provides a rigorous criterion for which reconstruction is not possible with probability 1. Mathematically, the inequality is based on the Frobenius theorem, which is a geometric incarnation of the fundamental theorem of linear algebra. The paper also provides a general mathematical formalism for the structure from motion problem. It includes the situation in which the points can move while the cameras take the pictures.
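The counting argument behind such a criterion can be mimicked with a short dimension check; the function below is only an illustration in the spirit of the inequality (unknowns versus measured coordinates, modulo the global symmetry group) and uses assumed parameter names, not the paper's exact statement.

    # Illustrative dimension count (assumption: q measured coordinates per
    # point per camera; the precise inequality and symmetry term are in the paper).
    def reconstruction_impossible(d, f, m, n, q, sym):
        """d: space dimension, f: parameters per camera, m: cameras,
        n: points, q: image coordinates per point per camera,
        sym: dimension of the global symmetry group."""
        unknowns  = d * n + f * m - sym     # point and camera parameters modulo symmetry
        equations = q * m * n               # measured image coordinates
        return unknowns > equations         # more unknowns than data: generically no reconstruction

    # Planar example with 2 cameras and 2 points (illustrative numbers)
    print(reconstruction_impossible(d=2, f=3, m=2, n=2, q=1, sym=4))   # -> True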
Space and camera path reconstruction for omni-directional vision
Knill, Oliver, Ramirez-Herran, Jose
In this paper, we address the inverse problem of reconstructing a scene as well as the camera motion from the image sequence taken by an omni-directional camera. Our structure from motion results give sharp conditions under which the reconstruction is unique. For example, if there are three points in general position and three omni-directional cameras in general position, a unique reconstruction is possible up to a similarity. We then look at the reconstruction problem with m cameras and n points, where n and m can be large and the over-determined system is solved by least-squares methods. The reconstruction is robust and generalizes to the case of a dynamic environment where landmarks can move during the movie capture. Possible applications of the result are computer-assisted scene reconstruction, 3D scanning, autonomous robot navigation, medical tomography and city reconstructions.
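The least-squares step for the over-determined system can be sketched generically; the matrix below is a random stand-in, not the system actually assembled from omni-directional measurements in the paper.

    # Generic least-squares solve of an over-determined linear system A x = b
    import numpy as np

    A = np.random.randn(100, 9)     # stand-in: 100 equations, 9 unknowns
    b = np.random.randn(100)
    x, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
    print(x.shape, rank)            # (9,) 9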
On Ullman's theorem in computer vision
Knill, Oliver, Ramirez-Herran, Jose
Both in the plane and in space, we invert the nonlinear Ullman transformation for 3 points and 3 orthographic cameras. While Ullman's theorem assures a unique reconstruction modulo a reflection for 3 cameras and 4 points, we find a locally unique reconstruction for 3 cameras and 3 points. Explicit reconstruction formulas allow one to decide whether picture data of three cameras seeing three points can be realized as a point-camera configuration.
Solving the subset-sum problem with a light-based device
We propose a special computational device which uses light rays for solving the subset-sum problem. The device has a graph-like representation and the light traverses it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us generate all possible subsets of the given set. To each arc we assign either a number from the given set or a predefined constant. When the light passes through an arc it is delayed by the amount of time indicated by the number assigned to that arc. At the destination node we check whether there is a ray whose total delay equals the target value of the subset-sum problem (plus some constants).
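The delay bookkeeping can be mimicked in software. The sketch below assumes one simple convention (an assumption, not necessarily the paper's exact wiring): the arc carrying element a_i delays a ray by a_i + c and the parallel "skip" arc by c, so every ray crosses the same number of arcs and a solution shows up as a total delay of target + n*c.

    # Software mimic of the delay idea: enumerate all rays/subsets and test
    # whether some total delay matches the target (convention assumed above).
    def subset_sum_by_delays(values, target, c=1):
        n = len(values)
        for mask in range(1 << n):
            delay = sum((values[i] + c) if (mask >> i) & 1 else c for i in range(n))
            if delay == target + n * c:
                return [values[i] for i in range(n) if (mask >> i) & 1]
        return None

    print(subset_sum_by_delays([3, 9, 8, 4, 5, 7], 15))   # -> [3, 8, 4]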