
A Minimum Description Length Approach to Multitask Feature Selection

arXiv.org Artificial Intelligence

Many regression problems involve not one but several response variables (y's). Often the responses are suspected to share a common underlying structure, in which case it may be advantageous to share information across them; this is known as multitask learning. As a special case, we can use multiple responses to better identify shared predictive features -- a project we might call multitask feature selection. This thesis is organized as follows. Section 1 introduces feature selection for regression, focusing on $\ell_0$ regularization methods and their interpretation within a Minimum Description Length (MDL) framework. Section 2 proposes a novel extension of MDL feature selection to the multitask setting. The approach, called the "Multiple Inclusion Criterion" (MIC), is designed to borrow information across regression tasks by more easily selecting features that are associated with multiple responses. We show in experiments on synthetic and real biological data sets that MIC can reduce prediction error in settings where features are at least partially shared across responses. Section 3 surveys hypothesis testing by regression with a single response, focusing on the parallel between the standard Bonferroni correction and an MDL approach. Mirroring the ideas in Section 2, Section 4 proposes a novel MIC approach to hypothesis testing with multiple responses and shows that on synthetic data with significant sharing of features across responses, MIC sometimes outperforms standard FDR-controlling methods in terms of finding true positives for a given level of false positives. Section 5 concludes.
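The single-task MDL criterion described here can be illustrated with a rough sketch: a feature earns its place only if the bits it saves in coding the residuals exceed the bits needed to name it. The cost terms below (index cost plus Gaussian residual cost, constants omitted) are a simplified stand-in for the thesis's exact coding scheme, and all names are illustrative.

```python
import math
import random

def mdl_description_length(rss, n, k, p):
    """Approximate description length in bits for a model using k of p
    features: k * log2(p) bits to name the chosen features, plus
    (n/2) * log2(rss/n) bits to code Gaussian residuals
    (additive constants and coefficient-coding costs omitted)."""
    return k * math.log2(p) + 0.5 * n * math.log2(rss / n)

def rss_single_feature(x, y):
    """Residual sum of squares for the least-squares fit y ~ beta * x."""
    sxx = sum(v * v for v in x)
    beta = sum(a * b for a, b in zip(x, y)) / sxx
    return sum((b - beta * a) ** 2 for a, b in zip(x, y))

# Toy data: feature 0 is predictive, the remaining p-1 are pure noise.
rng = random.Random(0)
n, p = 200, 10
X = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(p)]
y = [2.0 * X[0][i] + rng.gauss(0, 0.1) for i in range(n)]

null_rss = sum(v * v for v in y)
baseline = mdl_description_length(null_rss, n, 0, p)
costs = [mdl_description_length(rss_single_feature(X[j], y), n, 1, p)
         for j in range(p)]
best = min(range(p), key=lambda j: costs[j])
# The predictive feature should both win and beat the empty model.
print(best, costs[best] < baseline)
```

The multitask MIC idea can then be seen as amortizing the feature-index cost across responses: once a feature is paid for, including it in further tasks is cheaper, which is what makes shared features easier to select.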


I, Quantum Robot: Quantum Mind control on a Quantum Computer

arXiv.org Artificial Intelligence

The logic that describes quantum robots is not orthodox quantum logic but a deductive calculus that reproduces quantum tasks (computational processes and actions) while taking into account quantum superposition and quantum entanglement. A way toward the realization of intelligent quantum robots is to adopt a quantum metalanguage to control quantum robots. A physical implementation of a quantum metalanguage might be the use of coherent states in brain signals.


Considerations on Construction Ontologies

arXiv.org Artificial Intelligence

The paper proposes an analysis of some existing ontologies, in order to point out ways to resolve semantic heterogeneity in information systems. The authors highlight the tasks in a Knowledge Acquisition System and identify aspects related to the addition of new information to an intelligent system. A solution is proposed as a combination of ontology reasoning services and natural language generation. A multi-agent system will be conceived with an extractor agent, a reasoner agent and a competence management agent.


Mining Generalized Patterns from Large Databases using Ontologies

arXiv.org Artificial Intelligence

Formal Concept Analysis (FCA) is a mathematical theory based on the formalization of the notions of concept and concept hierarchies. It has been successfully applied to several Computer Science fields such as data mining, software engineering, and knowledge engineering, and in many domains like medicine, psychology, linguistics and ecology. For instance, it has been exploited for the design, mapping and refinement of ontologies. In this paper, we show how FCA can benefit from a given domain ontology by analyzing the impact of a taxonomy (on objects and/or attributes) on the resulting concept lattice. We will mainly concentrate on the usage of a taxonomy to extract generalized patterns (i.e., knowledge generated from data when elements of a given domain ontology are used) in the form of concepts and rules, and improve navigation through these patterns. To that end, we analyze three generalization cases and show their impact on the size of the generalized pattern set. Different scenarios of simultaneous generalizations on both objects and attributes are also discussed.
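The basic FCA objects the abstract refers to can be sketched in a few lines: given a binary context (objects and their attributes), a formal concept is a pair (extent, intent) closed under the two derivation operators. The tiny context and names below are illustrative, not from the paper.

```python
from itertools import chain, combinations

# A formal context: each object mapped to its attribute set.
context = {"o1": {"a", "b"}, "o2": {"b", "c"}, "o3": {"b"}}
all_attrs = set().union(*context.values())

def extent(attrs):
    """Objects possessing every attribute in attrs."""
    return frozenset(o for o, a in context.items() if attrs <= a)

def intent(objs):
    """Attributes shared by every object in objs."""
    sets = [context[o] for o in objs]
    return frozenset(set.intersection(*sets)) if sets else frozenset(all_attrs)

def concepts():
    """All formal concepts, found by closing every object subset
    (brute force; fine for tiny contexts only)."""
    subsets = chain.from_iterable(
        combinations(list(context), r) for r in range(len(context) + 1))
    found = set()
    for s in subsets:
        i = intent(frozenset(s))
        found.add((extent(i), i))  # (extent(intent(S)), intent(S)) is closed
    return found

for ext, itt in sorted(concepts(), key=lambda c: -len(c[0])):
    print(sorted(ext), sorted(itt))
```

A taxonomy-driven generalization in the paper's sense would replace attributes (or objects) by ancestors from the ontology before building the context; the paper's three cases analyze how such replacements change the size of the resulting lattice.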


Automating Quantified Multimodal Logics in Simple Type Theory -- A Case Study

arXiv.org Artificial Intelligence

In a case study we investigate whether off-the-shelf higher-order theorem provers and model generators can be employed to automate reasoning in and about quantified multimodal logics. In our experiments we exploit the new TPTP infrastructure for classical higher-order logic.


Hiding Quiet Solutions in Random Constraint Satisfaction Problems

arXiv.org Artificial Intelligence

We study constraint satisfaction problems on the so-called 'planted' random ensemble. We show that for a certain class of problems, e.g. graph coloring, many of the properties of the usual random ensemble are quantitatively identical in the planted random ensemble. We study the structural phase transitions, and the easy/hard/easy pattern in the average computational complexity. We also discuss the finite temperature phase diagram, finding a close connection with the liquid/glass/solid phenomenology.


Information Modeling for a Dynamic Representation of an Emergency Situation

arXiv.org Artificial Intelligence

In this paper we propose an approach to build a decision support system that can help emergency planners and responders to detect and manage emergency situations. The internal mechanism of the system is independent of the treated application, so we believe the system can be reused or easily adapted to different case studies. We focus here on a first step in the decision-support process: the modeling of information drawn from the perceived environment and its dynamic representation using a multiagent system. This modeling was applied to the RoboCupRescue Simulation System. An implementation and some results are presented here.


Characterizing predictable classes of processes

arXiv.org Artificial Intelligence

The problem is sequence prediction in the following setting. A sequence $x_1,...,x_n,...$ of discrete-valued observations is generated according to some unknown probabilistic law (measure) $\mu$. After observing each outcome, it is required to give the conditional probabilities of the next observation. The measure $\mu$ belongs to an arbitrary class $\mathcal{C}$ of stochastic processes. We are interested in predictors $\rho$ whose conditional probabilities converge to the "true" $\mu$-conditional probabilities if any $\mu\in\mathcal{C}$ is chosen to generate the data. We show that if such a predictor exists, then a predictor can also be obtained as a convex combination of countably many elements of $\mathcal{C}$. In other words, it can be obtained as a Bayesian predictor whose prior is concentrated on a countable set. This result is established for two very different measures of performance of prediction, one of which is very strong, namely, total variation, and the other is very weak, namely, prediction in expected average Kullback-Leibler divergence.
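The Bayesian predictor in the result above can be sketched for the simplest case: a countable (here, finite for illustration) prior over i.i.d. Bernoulli sources, with the posterior updated after each symbol. This is a toy instance of a mixture predictor, not the paper's general construction.

```python
def mixture_predictor(params, weights, seq):
    """Predictive probability that the next symbol is 1 under a Bayesian
    mixture of i.i.d. Bernoulli(p) sources (p in params) with prior
    weights; the (unnormalized) posterior is updated per symbol."""
    w = list(weights)
    for bit in seq:
        w = [wi * (p if bit == 1 else 1.0 - p)
             for wi, p in zip(w, params)]
    total = sum(w)
    return sum(wi * p for wi, p in zip(w, params)) / total

# Uniform prior over three candidate sources; data is all ones, so the
# posterior concentrates on the p = 0.9 source.
params = [0.1, 0.5, 0.9]
weights = [1 / 3] * 3
print(round(mixture_predictor(params, weights, [1] * 50), 3))
```

The paper's point is stronger than this toy: whenever *any* consistent predictor for the class exists, such a countable convex combination already suffices.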


Approximate inference on planar graphs using Loop Calculus and Belief Propagation

arXiv.org Artificial Intelligence

We introduce novel results for approximate inference on planar graphical models using the loop calculus framework. The loop calculus (Chertkov and Chernyak, 2006) allows one to express the exact partition function of a graphical model as a finite sum of terms that can be evaluated once the belief propagation (BP) solution is known. In general, full summation over all correction terms is intractable. We develop an algorithm for the approach presented in (Chertkov et al., 2008) which represents an efficient truncation scheme on planar graphs and a new representation of the series in terms of Pfaffians of matrices. We analyze the performance of the algorithm for the partition function approximation for models with binary variables and pairwise interactions on grids and other planar graphs. We study in detail both the loop series and the equivalent Pfaffian series and show that the first term of the Pfaffian series for the general, intractable planar model, can provide very accurate approximations. The algorithm outperforms previous truncation schemes of the loop series and is competitive with other state-of-the-art methods for approximate inference.
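The quantity being approximated here is the partition function Z of a binary pairwise model, and truncation schemes like these are typically benchmarked against the exact Z computed by brute force on tiny instances. The sketch below computes that exact reference value under an assumed Ising-style parameterization (couplings J, fields h); it is not the paper's Pfaffian algorithm.

```python
import math
from itertools import product

def exact_partition_function(n, J, h):
    """Brute-force Z = sum over spin configurations s in {-1,+1}^n of
    exp( sum_i h[i]*s_i + sum_{(i,j)} J[i,j]*s_i*s_j ).
    Exponential in n, so tractable only for tiny models."""
    Z = 0.0
    for s in product((-1, 1), repeat=n):
        energy = sum(h[i] * s[i] for i in range(n))
        energy += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
        Z += math.exp(energy)
    return Z

# A 2x2 grid (4 spins, 4 edges), uniform couplings, no fields.
edges = [(0, 1), (1, 3), (3, 2), (2, 0)]
J = {e: 0.5 for e in edges}
h = [0.0] * 4
print(exact_partition_function(4, J, h))
```

The loop/Pfaffian series approach instead starts from the BP approximation of Z and adds correction terms, which on planar graphs can be organized so that even the first Pfaffian term is very accurate.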


Granularity-Adaptive Proof Presentation

arXiv.org Artificial Intelligence

When mathematicians present proofs they usually adapt their explanations to their didactic goals and to the (assumed) knowledge of their addressees. Modern automated theorem provers, in contrast, present proofs usually at a fixed level of detail (also called granularity). Often these presentations are neither intended nor suitable for human use. A challenge therefore is to develop user- and goal-adaptive proof presentation techniques that obey common mathematical practice. We present a flexible and adaptive approach to proof presentation that exploits machine learning techniques to extract a model of the specific granularity of proof examples and employs this model for the automated generation of further proofs at an adapted level of granularity.