Real-time retrieval for case-based reasoning in interactive multiagent-based simulations
De Loor, Pierre, Bénard, Romain, Chevaillier, Pierre
The aim of this paper is to present the principles and results of case-based reasoning adapted to real-time interactive simulations, focusing on retrieval mechanisms. The article begins by introducing the constraints involved in interactive multiagent-based simulations. The second section presents a framework stemming from case-based reasoning by autonomous agents. Each agent uses a case base of local situations and, from this base, it can choose an action in order to interact with other autonomous agents or users' avatars. We illustrate this framework with an example dedicated to the study of dynamic situations in football. We then go on to address the difficulties of conducting such simulations in real time and propose a model for cases and for the case base. Using generic agents and an adequate case-base structure associated with a dedicated recall algorithm, we improve retrieval performance under time pressure compared to classic CBR techniques. We present some results relating to the performance of this solution. The article concludes by outlining future developments of our project.
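As a rough illustration of the retrieval problem the abstract describes (a hedged sketch, not the authors' dedicated recall algorithm): flat nearest-neighbour retrieval over the case base is the classic baseline whose cost becomes problematic under real-time constraints. The sketch below uses invented feature vectors and a simple anytime time budget.

```python
import time
import numpy as np

def retrieve_case(query, case_base, actions, budget_s=0.001):
    """Anytime nearest-neighbour retrieval: scan the case base until the
    time budget expires and return the best action found so far.
    A minimal baseline only; the paper's case-base structure and
    dedicated recall algorithm are not reproduced here."""
    best_action, best_dist = None, np.inf
    start = time.perf_counter()
    for situation, action in zip(case_base, actions):
        if time.perf_counter() - start > budget_s:
            break  # real-time constraint: answer with the best match so far
        d = np.linalg.norm(query - situation)  # similarity = Euclidean distance
        if d < best_dist:
            best_dist, best_action = d, action
    return best_action

# Hypothetical agent-local case base: situations as feature vectors,
# actions as small integer codes (e.g. pass/shoot/move in the football example).
rng = np.random.default_rng(0)
case_base = rng.normal(size=(10_000, 8))
actions = rng.integers(0, 5, size=10_000)
print(retrieve_case(rng.normal(size=8), case_base, actions))
```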
A Temporal Neuro-Fuzzy Monitoring System for Manufacturing Systems
Mahdaoui, Rafik, Mouss, Leila Hayet, Mouss, Mohamed Djamel, Chouhal, Ouahiba
Fault diagnosis and failure prognosis are essential techniques for improving the safety of many manufacturing systems, and on-line fault detection and isolation is therefore one of the most important tasks in safety-critical and intelligent control systems. Computational intelligence techniques are being investigated as extensions of traditional fault diagnosis methods. This paper discusses Temporal Neuro-Fuzzy Systems (TNFS) for fault diagnosis within an application study of a manufacturing system. The key issues of finding a suitable structure for detecting and isolating ten realistic actuator faults are described. Within this framework, an interactive data-processing and simulation software package named NEFDIAG (NEuro-Fuzzy DIAGnosis) version 1.0 has been developed. This software is devoted primarily to the creation, training, and testing of a neuro-fuzzy classification system for industrial process failures. NEFDIAG can be represented as a special type of fuzzy perceptron with three layers, used to classify patterns and failures. The system selected is the clinker workshop of the SCIMAT cement factory in Algeria.
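To make the "three-layer fuzzy perceptron" concrete (a generic textbook sketch, not NEFDIAG itself): the first layer fuzzifies sensor readings with membership functions, the second computes rule firing strengths, and the third aggregates strengths into failure-class scores. All membership parameters and rules below are invented.

```python
import numpy as np

def gauss_mf(x, c, s):
    """Layer 1: Gaussian membership degree of reading x in a fuzzy set (c, s)."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def classify(x, rules, rule_to_class, n_classes):
    """Generic three-layer fuzzy classifier (illustrative only):
    layer 2 computes each rule's firing strength as the product of its
    antecedents' membership degrees; layer 3 sums strengths per class."""
    strengths = [np.prod([gauss_mf(x[i], c, s) for i, c, s in rule])
                 for rule in rules]
    scores = np.zeros(n_classes)
    for strength, cls in zip(strengths, rule_to_class):
        scores[cls] += strength
    return int(np.argmax(scores)), scores

# Two invented rules over two sensor channels, mapped to two failure classes.
rules = [
    [(0, 0.2, 0.1), (1, 0.8, 0.2)],  # rule 0: sensor0 LOW and sensor1 HIGH
    [(0, 0.9, 0.1), (1, 0.1, 0.2)],  # rule 1: sensor0 HIGH and sensor1 LOW
]
print(classify(np.array([0.25, 0.75]), rules, rule_to_class=[0, 1], n_classes=2))
```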
From decision to action: intentionality, a guide for the specification of intelligent agents' behaviour
De Loor, Pierre, Favier, Pierre-Alexandre
This article introduces a reflection on behavioural specification for interactive and participative agent-based simulation in virtual reality. Within this context, it is necessary to reach a high level of expressiveness in order to reinforce interactions between the designer and the behavioural model during in-line prototyping. This requires considering the need for semantics very early in the design process. The Intentional agent model is presented here as a possible answer. It relies on a mixed imperative and declarative approach which focuses on the link between decision and action. The design of a tool able to simulate virtual environments involving agents based on this model is discussed.
On Learning Discrete Graphical Models Using Greedy Methods
Jalali, Ali, Johnson, Chris, Ravikumar, Pradeep
In this paper, we address the problem of learning the structure of a pairwise graphical model from samples in a high-dimensional setting. Our first main result studies the sparsistency, or consistency in sparsity pattern recovery, properties of a forward-backward greedy algorithm as applied to general statistical models. As a special case, we then apply this algorithm to learn the structure of a discrete graphical model via neighborhood estimation. As a corollary of our general result, we derive sufficient conditions on the number of samples $n$, the maximum node-degree $d$ and the problem size $p$, as well as other conditions on the model parameters, so that the algorithm recovers all the edges with high probability. Our result guarantees graph selection for sample sizes scaling as $n = \Omega(d^2 \log p)$, in contrast to existing convex-optimization based algorithms that require a sample complexity of $\Omega(d^3 \log p)$. Further, the greedy algorithm only requires a restricted strong convexity condition which is typically milder than irrepresentability assumptions. We corroborate these results using numerical simulations at the end.
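A minimal sketch of the forward-backward greedy scheme the abstract refers to, instantiated for plain least-squares feature selection rather than the paper's general statistical models; the thresholds, data, and stopping rule below are our own invented choices.

```python
import numpy as np

def forward_backward_greedy(X, y, eps=1e-3, nu=0.5):
    """Forward-backward greedy selection (illustrative sketch):
    the forward step adds the feature that most reduces the squared loss;
    the backward step drops a feature whose removal costs less than nu
    times the last forward gain. Follows the general FoBa scheme, not the
    paper's exact algorithm or theoretical stopping rules."""
    n, p = X.shape
    S = set()

    def loss(active):
        if not active:
            return np.sum(y ** 2) / n
        A = X[:, sorted(active)]
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        return np.sum((y - A @ beta) ** 2) / n

    current = loss(S)
    while len(S) < p:
        # Forward: best single feature to add.
        gains = {j: current - loss(S | {j}) for j in range(p) if j not in S}
        j_best, gain = max(gains.items(), key=lambda kv: kv[1])
        if gain < eps:
            break
        S.add(j_best)
        current -= gain
        # Backward: drop features that have become nearly redundant.
        while len(S) > 1:
            costs = {j: loss(S - {j}) - current for j in S}
            j_drop, cost = min(costs.items(), key=lambda kv: kv[1])
            if cost >= nu * gain:
                break
            S.remove(j_drop)
            current += cost
    return sorted(S)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = X[:, 3] - 2 * X[:, 7] + 0.1 * rng.normal(size=200)
print(forward_backward_greedy(X, y))  # expect features 3 and 7
```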
Astroinformatics of galaxies and quasars: a new general method for photometric redshifts estimation
Laurino, Omar, D'Abrusco, Raffaele, Longo, Giuseppe, Riccio, Giuseppe
With the availability of the huge amounts of data produced by current and future large multi-band photometric surveys, photometric redshifts have become a crucial tool for extragalactic astronomy and cosmology. In this paper we present a novel method, called Weak Gated Experts (WGE), which derives photometric redshifts through a combination of data mining techniques. The WGE, like many other machine learning techniques, is based on the exploitation of a spectroscopic knowledge base composed of sources for which a spectroscopic value of the redshift is available. The method achieves a variance $\sigma^2(\Delta z) = 2.3\times10^{-4}$ for the reconstruction of the photometric redshifts of optical galaxies from the SDSS, and $\sigma^2(\Delta z) = 0.08$ for optical quasars, where $\Delta z = z_{phot} - z_{spec}$; the Root Mean Square (RMS) of the $\Delta z$ distributions for the two experiments is equal to 0.021 and 0.35, respectively. The WGE also provides a mechanism for estimating the accuracy of each photometric redshift. We also present and discuss the catalogs obtained for the optical SDSS galaxies, for the optical candidate quasars extracted from the DR7 SDSS photometric dataset (the sample of SDSS sources on which the accuracy of the reconstruction has been assessed is composed of bright sources, for a subset of which spectroscopic redshifts have been measured), and for optical SDSS candidate quasars observed by GALEX in the UV range. The WGE method exploits the new technological paradigm provided by the Virtual Observatory and the emerging field of Astroinformatics.
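The accuracy figures quoted in the abstract reduce to two simple statistics of $\Delta z = z_{phot} - z_{spec}$, which the short sketch below computes on synthetic values (a hedged illustration of the metrics only; variable names are ours, and no SDSS data is involved).

```python
import numpy as np

def redshift_metrics(z_phot, z_spec):
    """Metrics quoted in the abstract: variance and RMS of
    Delta z = z_phot - z_spec over a spectroscopic knowledge base."""
    dz = z_phot - z_spec
    return {"var": np.var(dz), "rms": np.sqrt(np.mean(dz ** 2))}

# Toy example with synthetic redshifts (not SDSS data).
rng = np.random.default_rng(1)
z_spec = rng.uniform(0.0, 0.6, size=1000)
z_phot = z_spec + rng.normal(0.0, 0.02, size=1000)  # ~0.02 scatter
print(redshift_metrics(z_phot, z_spec))
```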
An Ontology-driven Framework for Supporting Complex Decision Process
This study proposes a framework for an ONTOlogy-based Group Decision Support System (ONTOGDSS) for decision processes that exhibit the complex structure of decision problems and decision groups. It is capable of reducing the complexity of the problem structure and group relations. The system allows decision makers to participate in group decision-making through the web environment, via the ontology relation. It facilitates the management of the decision process as a whole, from criteria generation, alternative evaluation, and opinion interaction to decision aggregation. The ontology structure embedded in ONTOGDSS provides important formal description features that facilitate decision analysis and verification. The paper examines the software architecture, the selection methods, the decision path, and related aspects. Finally, the ontology application of this system is illustrated with a specific real case to demonstrate its potential for decision-making development.
A Survey on how Description Logic Ontologies Benefit from Formal Concept Analysis
Although the notion of a concept as a collection of objects sharing certain properties, and the notion of a conceptual hierarchy, are fundamental to both Formal Concept Analysis and Description Logics, the ways concepts are described and obtained differ significantly between these two research areas. Despite these differences, there have been several attempts to bridge the gap between the two formalisms and to apply methods from one field in the other. The present work aims to give an overview of the research done on combining Description Logics and Formal Concept Analysis.
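For readers coming from the DL side, the FCA notion of a concept can be made concrete in a few lines (a standard textbook construction, not specific to any surveyed paper): a formal concept of a binary context is a pair (extent, intent) closed under the two derivation operators.

```python
from itertools import chain, combinations

def derive_attrs(objs, context):
    """Attributes shared by every object in objs (the ' operator on objects)."""
    return {a for a in context["attrs"]
            if all(a in context["incidence"][o] for o in objs)}

def derive_objs(attrs, context):
    """Objects having every attribute in attrs (the ' operator on attributes)."""
    return {o for o in context["objs"]
            if attrs <= context["incidence"][o]}

def formal_concepts(context):
    """Enumerate all formal concepts (extent, intent) of a small binary
    context by closing every subset of objects. Exponential brute force;
    fine for illustration, not for real contexts."""
    objs = sorted(context["objs"])
    concepts = set()
    for subset in chain.from_iterable(
            combinations(objs, r) for r in range(len(objs) + 1)):
        intent = derive_attrs(set(subset), context)
        extent = derive_objs(intent, context)
        concepts.add((frozenset(extent), frozenset(intent)))
    return concepts

# Tiny invented context: animals and their properties.
ctx = {"objs": {"cat", "carp", "crow"},
       "attrs": {"has_legs", "can_fly", "lives_in_water"},
       "incidence": {"cat": {"has_legs"},
                     "crow": {"has_legs", "can_fly"},
                     "carp": {"lives_in_water"}}}
for extent, intent in sorted(formal_concepts(ctx), key=lambda c: len(c[0])):
    print(set(extent) or "{}", "|", set(intent) or "{}")
```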
Iteration Complexity of Randomized Block-Coordinate Descent Methods for Minimizing a Composite Function
Richtárik, Peter, Takáč, Martin
In this paper we develop a randomized block-coordinate descent method for minimizing the sum of a smooth and a simple nonsmooth block-separable convex function and prove that it obtains an $\epsilon$-accurate solution with probability at least $1-\rho$ in at most $O(\tfrac{n}{\epsilon} \log \tfrac{1}{\rho})$ iterations, where $n$ is the number of blocks. For strongly convex functions the method converges linearly. This extends recent results of Nesterov [Efficiency of coordinate descent methods on huge-scale optimization problems, CORE Discussion Paper #2010/2], which cover the smooth case, to composite minimization, while at the same time improving the complexity by a factor of 4 and removing $\epsilon$ from the logarithmic term. More importantly, in contrast with the aforementioned work, in which the author achieves the results by applying the method to a regularized version of the objective function with an unknown scaling factor, we show that this is not necessary, thus achieving true iteration complexity bounds. In the smooth case we also allow for arbitrary probability vectors and non-Euclidean norms. Finally, we demonstrate numerically that the algorithm is able to solve huge-scale $\ell_1$-regularized least squares and support vector machine problems with a billion variables.
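A minimal sketch of randomized block-coordinate descent for one instance of the composite objective in the abstract, $\ell_1$-regularized least squares (our own toy instantiation, not the paper's general algorithm): at each iteration a block is sampled uniformly, a gradient step is taken on the smooth part restricted to that block, and the block's prox (soft-thresholding) handles the nonsmooth part.

```python
import numpy as np

def rbcd_lasso(X, y, lam=0.1, n_blocks=4, iters=5000, seed=0):
    """Randomized block-coordinate descent for
    min_w 0.5*||Xw - y||^2 + lam*||w||_1 (a toy composite objective).
    Each step: sample a block, take a gradient step on the smooth term
    over that block, then apply soft-thresholding as the block prox."""
    n, p = X.shape
    blocks = np.array_split(np.arange(p), n_blocks)
    # Blockwise Lipschitz constants of the smooth part's gradient.
    L = [np.linalg.norm(X[:, b], 2) ** 2 for b in blocks]
    w = np.zeros(p)
    residual = X @ w - y
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        i = rng.integers(n_blocks)           # uniform block sampling
        b = blocks[i]
        grad_b = X[:, b].T @ residual        # block gradient of 0.5*||Xw - y||^2
        step = w[b] - grad_b / L[i]
        w_new = np.sign(step) * np.maximum(np.abs(step) - lam / L[i], 0.0)
        residual += X[:, b] @ (w_new - w[b])  # keep the residual consistent
        w[b] = w_new
    return w

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 40))
y = X[:, :3] @ np.array([1.0, -2.0, 3.0]) + 0.05 * rng.normal(size=100)
print(np.round(rbcd_lasso(X, y), 2)[:10])  # first three weights dominate
```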
Linear Latent Force Models using Gaussian Processes
Álvarez, Mauricio A., Luengo, David, Lawrence, Neil D.
Purely data-driven approaches to machine learning present difficulties when data are scarce relative to the complexity of the model, or when the model is forced to extrapolate. On the other hand, purely mechanistic approaches need to identify and specify all the interactions in the problem at hand (which may not be feasible) and still leave open the issue of how to parameterize the system. In this paper, we present a hybrid approach using Gaussian processes and differential equations to combine data-driven modelling with a physical model of the system. We show how different, physically inspired kernel functions can be developed through sensible, simple, mechanistic assumptions about the underlying system. The versatility of our approach is illustrated with three case studies from motion capture, computational biology and geostatistics.
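One concrete instance of a physically inspired kernel in the spirit of the abstract (our own numerical sketch; the latent force model literature gives such covariances in closed form): if an output $x(t)$ obeys the first-order ODE $dx/dt + \gamma x = u(t)$ with the latent force $u$ drawn from a GP with RBF covariance, the induced output covariance is the double convolution of the ODE's Green's function with the force covariance, approximated here by quadrature.

```python
import numpy as np

def lfm_output_cov(ts, gamma=1.0, ell=0.5, n_quad=400, t_max=None):
    """Covariance of x(t) where dx/dt + gamma*x = u(t), x(0) = 0, and u is
    a zero-mean GP with k_uu(s, s') = exp(-(s - s')^2 / (2*ell^2)).
    k_xx(t, t') = int_0^t int_0^t' e^{-gamma(t-s)} e^{-gamma(t'-s')}
                  k_uu(s, s') ds ds', approximated on a quadrature grid."""
    t_max = t_max or max(ts)
    s = np.linspace(0.0, t_max, n_quad)
    ds = s[1] - s[0]
    k_uu = np.exp(-0.5 * (s[:, None] - s[None, :]) ** 2 / ell ** 2)
    K = np.zeros((len(ts), len(ts)))
    for a, t1 in enumerate(ts):
        g1 = np.where(s <= t1, np.exp(-gamma * (t1 - s)), 0.0)  # Green's function
        for b, t2 in enumerate(ts):
            g2 = np.where(s <= t2, np.exp(-gamma * (t2 - s)), 0.0)
            K[a, b] = g1 @ k_uu @ g2 * ds * ds
    return K

ts = np.linspace(0.1, 3.0, 5)
print(np.round(lfm_output_cov(ts), 4))  # symmetric, positive semi-definite
```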
Fast Learning Rate of lp-MKL and its Minimax Optimality
In this paper, we give a new sharp generalization bound for lp-MKL, a generalized framework of multiple kernel learning (MKL) that imposes lp-mixed-norm regularization instead of l1-mixed-norm regularization. We utilize localization techniques to obtain the sharp learning rate. The bound is characterized by the decay rate of the eigenvalues of the associated kernels: a larger decay rate gives a faster convergence rate. Furthermore, we give the minimax learning rate on the ball characterized by the lp-mixed-norm in the product space, and show that our derived learning rate of lp-MKL achieves the minimax optimal rate on the lp-mixed-norm ball.
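For reference, the lp-mixed-norm regularizer that the abstract contrasts with the l1 case is usually written as follows (standard MKL notation, not taken from this paper; $H_m$ denotes the RKHS of the $m$-th kernel and $M$ the number of kernels):

```latex
% lp-mixed-norm over the product space H_1 x ... x H_M (standard MKL notation):
% the estimator is f = f_1 + ... + f_M with each component f_m in H_m.
\[
  \|f\|_{\ell_p}
  = \Bigl( \sum_{m=1}^{M} \|f_m\|_{H_m}^{p} \Bigr)^{1/p},
  \qquad \text{typically } 1 \le p \le 2,
\]
% p = 1 recovers the l1-mixed-norm used in sparse MKL.
```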