Computer Model of a "Sense of Humour". I. General Algorithm
A computer model of a "sense of humour" is proposed. The humorous effect is interpreted as a specific malfunction in the course of information processing, caused by the need to delete rapidly a false version that has been transmitted into consciousness. The biological function of a sense of humour is to speed up the delivery of information into consciousness and to make fuller use of the resources of the brain.
Building Rules on Top of Ontologies for the Semantic Web with Inductive Logic Programming
Building rules on top of ontologies is the ultimate goal of the logical layer of the Semantic Web. To this end, an ad hoc mark-up language for this layer is currently under discussion. It is intended to follow the tradition of hybrid knowledge representation and reasoning systems such as $\mathcal{AL}$-log, which integrates the description logic $\mathcal{ALC}$ and the function-free Horn clausal language \textsc{Datalog}. In this paper we consider the problem of automating the acquisition of these rules for the Semantic Web. We propose a general framework for rule induction that adopts the methodological apparatus of Inductive Logic Programming and relies on the expressive and deductive power of $\mathcal{AL}$-log. The framework is valid whatever the scope of induction (description vs. prediction) is. Yet, for illustrative purposes, we also discuss an instantiation of the framework which aims at description and turns out to be useful in Ontology Refinement.
Keywords: Inductive Logic Programming, Hybrid Knowledge Representation and Reasoning Systems, Ontologies, Semantic Web.
Note: To appear in Theory and Practice of Logic Programming (TPLP)
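As a purely illustrative sketch of the kind of rule at stake (the predicate and concept names are hypothetical, not taken from the paper), an $\mathcal{AL}$-log constrained Datalog clause pairs a Horn body with $\mathcal{ALC}$ concept assertions on its variables:

$$q(X) \leftarrow p(X,Y), r(Y)\ \&\ X{:}\mathit{Customer},\ Y{:}\mathit{Product}$$

where the constraints after the `&' restrict the clause variables to instances of the ontology concepts $\mathit{Customer}$ and $\mathit{Product}$.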
Getting started in probabilistic graphical models
Probabilistic graphical models (PGMs) have become a popular tool for computational analysis of biological data in a variety of domains. But, what exactly are they and how do they work? How can we use PGMs to discover patterns that are biologically relevant? And to what extent can PGMs help us formulate new hypotheses that are testable at the bench? This note sketches out some answers and illustrates the main ideas behind the statistical approach to biological pattern discovery.
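As a back-of-the-envelope illustration of the statistical viewpoint (the model and the numbers below are invented for the example, not drawn from the note), the smallest possible graphical model is a single edge Disease → Test, queried with Bayes' rule:

```python
# Toy illustration: a two-node discrete graphical model, Disease -> Test,
# queried by direct application of Bayes' rule. Numbers are made up.

p_disease = 0.01                      # prior P(D = 1)
p_pos_given_d = {1: 0.95, 0: 0.05}    # likelihood P(T = positive | D)

# posterior P(D = 1 | T = positive)
evidence = sum(p_pos_given_d[d] * (p_disease if d else 1 - p_disease) for d in (0, 1))
posterior = p_pos_given_d[1] * p_disease / evidence
print(round(posterior, 3))            # about 0.161
```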
Optimal Solutions for Sparse Principal Component Analysis
d'Aspremont, Alexandre, Bach, Francis, El Ghaoui, Laurent
Given a sample covariance matrix, we examine the problem of maximizing the variance explained by a linear combination of the input variables while constraining the number of nonzero coefficients in this combination. This is known as sparse principal component analysis and has a wide array of applications in machine learning and engineering. We formulate a new semidefinite relaxation to this problem and derive a greedy algorithm that computes a full set of good solutions for all target numbers of nonzero coefficients, with total complexity O(n^3), where n is the number of variables. We then use the same relaxation to derive sufficient conditions for global optimality of a solution, which can be tested in O(n^3) per pattern. We discuss applications in subset selection and sparse recovery and show on artificial examples and biological data that our algorithm does provide globally optimal solutions in many cases.
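To make the greedy idea concrete, the sketch below performs plain forward selection on the covariance matrix, growing the support one variable at a time by the largest leading eigenvalue of the corresponding principal submatrix. It mirrors the spirit of the approach but is not the authors' O(n^3) implementation or the semidefinite relaxation.

```python
import numpy as np

def greedy_sparse_pca(S, k_max):
    """Greedy forward selection for sparse PCA (illustrative sketch).

    S      : (n, n) sample covariance matrix
    k_max  : largest target cardinality
    Returns a dict mapping cardinality k to (support, leading eigenvalue).
    """
    n = S.shape[0]
    support = [int(np.argmax(np.diag(S)))]        # start with the highest-variance variable
    solutions = {1: (list(support), S[support[0], support[0]])}
    for k in range(2, k_max + 1):
        best_var, best_val = None, -np.inf
        for j in range(n):
            if j in support:
                continue
            idx = support + [j]
            # leading eigenvalue of the principal submatrix indexed by idx
            val = np.linalg.eigvalsh(S[np.ix_(idx, idx)])[-1]
            if val > best_val:
                best_var, best_val = j, val
        support.append(best_var)
        solutions[k] = (list(support), best_val)
    return solutions

# toy usage on a random covariance matrix with three correlated variables
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
X[:, :3] += 2.0 * rng.normal(size=(200, 1))
S = np.cov(X, rowvar=False)
for k, (supp, lam) in greedy_sparse_pca(S, 4).items():
    print(k, sorted(supp), round(lam, 3))
```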
Semantic results for ontic and epistemic change
van Ditmarsch, H. P., Kooi, B. P.
We give some semantic results for an epistemic logic incorporating dynamic operators to describe information changing events. Such events include epistemic changes, where agents become more informed about the non-changing state of the world, and ontic changes, wherein the world changes. The events are executed in information states that are modeled as pointed Kripke models. Our contribution consists of three semantic results. (i) Given two information states, there is an event transforming one into the other. The linguistic correspondent to this is that every consistent formula can be made true in every information state by the execution of an event. (ii) A more technical result is that every event corresponds to an event in which the postconditions formalizing ontic change are assignments to `true' and `false' only (instead of assignments to arbitrary formulas in the logical language). `Corresponds' means that execution of either event in a given information state results in bisimilar information states. (iii) The third, also technical, result is that every event corresponds to a sequence of events wherein all postconditions are assignments of a single atom only (instead of simultaneous assignments of more than one atom).
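A minimal sketch of the two kinds of change, under a drastically simplified setting (single agent, public events only, hypothetical data structures rather than the paper's action models): an announcement deletes worlds, while an assignment rewrites valuations.

```python
# Illustrative sketch only: a pointed Kripke model for one agent, with a public
# announcement (epistemic change: worlds are deleted) and a public assignment
# p := phi (ontic change: valuations are rewritten). Formulas are encoded as
# predicates over the set of atoms true at a world.

from dataclasses import dataclass

@dataclass
class KripkeModel:
    worlds: set
    relation: dict          # agent -> set of (w, v) pairs
    valuation: dict         # world -> set of true atoms
    point: object           # the actual world

def announce(model, phi):
    """Epistemic change: keep only the worlds where phi holds."""
    keep = {w for w in model.worlds if phi(model.valuation[w])}
    return KripkeModel(
        worlds=keep,
        relation={a: {(w, v) for (w, v) in r if w in keep and v in keep}
                  for a, r in model.relation.items()},
        valuation={w: set(model.valuation[w]) for w in keep},
        point=model.point,
    )

def assign(model, atom, phi):
    """Ontic change: in every world, set `atom` to the current truth value of phi."""
    new_val = {}
    for w in model.worlds:
        atoms = set(model.valuation[w])
        if phi(model.valuation[w]):
            atoms.add(atom)
        else:
            atoms.discard(atom)
        new_val[w] = atoms
    return KripkeModel(model.worlds, model.relation, new_val, model.point)

# two worlds that agent a cannot tell apart; p holds only in w1
M = KripkeModel(
    worlds={"w1", "w2"},
    relation={"a": {("w1", "w1"), ("w1", "w2"), ("w2", "w1"), ("w2", "w2")}},
    valuation={"w1": {"p"}, "w2": set()},
    point="w1",
)
M1 = announce(M, lambda v: "p" in v)        # agent a learns that p
M2 = assign(M, "q", lambda v: "p" in v)     # q is set to the value of p in every world
print(M1.worlds, M2.valuation)
```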
Supervised Machine Learning with a Novel Pointwise Density Estimator
Oyang, Yen-Jen, Chen, Chien-Yu, Chang, Darby Tien-Hao, Wu, Chih-Peng
This article proposes a novel density-estimation-based algorithm for carrying out supervised machine learning. The proposed algorithm features O(n) time complexity for generating a classifier, where n is the number of sampling instances in the training dataset. This feature is highly desirable in contemporary applications that involve large and still growing databases. In comparison with kernel-density-estimation-based approaches, the mathematical foundation behind the proposed algorithm does not rest on the assumption that the number of training instances approaches infinity. As a result, a classifier generated with the proposed algorithm may deliver higher prediction accuracy than a kernel-density-estimation-based classifier in some cases.
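For orientation only, the sketch below shows the general flavour of a density-estimation-based classifier whose training pass is linear in the number of instances. It uses per-class Gaussian summaries (a naive-Bayes-style plug-in rule) and is not the pointwise estimator proposed in the article.

```python
import numpy as np

# Illustrative sketch: a density-estimation-based classifier with an O(n)
# training pass (per-class Gaussian summaries). NOT the authors' estimator.

class GaussianDensityClassifier:
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.stats_ = {}
        for c in self.classes_:               # one linear pass per class
            Xc = X[y == c]
            self.stats_[c] = (Xc.mean(axis=0),
                              Xc.var(axis=0) + 1e-9,
                              len(Xc) / len(X))
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            mu, var, prior = self.stats_[c]
            log_density = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var)
            scores.append(log_density.sum(axis=1) + np.log(prior))
        return self.classes_[np.argmax(np.vstack(scores), axis=0)]

# usage on toy two-class data
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
print(GaussianDensityClassifier().fit(X, y).predict(X[:5]))
```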
Discriminated Belief Propagation
Near optimal decoding of good error control codes is generally a difficult task. However, for a certain type of (sufficiently) good codes an efficient decoding algorithm with near optimal performance exists. These codes are defined via a combination of constituent codes with low complexity trellis representations. Their decoding algorithm is an instance of (loopy) belief propagation and is based on an iterative transfer of constituent beliefs. The beliefs are thereby given by the symbol probabilities computed in the constituent trellises. Even though weak constituent codes are employed, close to optimal performance is obtained, i.e., the encoder/decoder pair (almost) achieves the information theoretic capacity. However, (loopy) belief propagation only performs well for a rather specific set of codes, which limits its applicability. In this paper a generalisation of iterative decoding is presented. It is proposed to transfer more values than just the constituent beliefs. This is achieved by the transfer of beliefs obtained by independently investigating parts of the code space. This leads to the concept of discriminators, which are used to improve the decoder resolution within certain areas, and defines discriminated symbol beliefs. It is shown that these beliefs approximate the overall symbol probabilities. This leads to an iteration rule that (below channel capacity) typically only admits the solution of the overall decoding problem. Via a Gauss approximation a low complexity version of this algorithm is derived. Moreover, the approach may then be applied to a wide range of channel maps without significant complexity increase.
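As background for readers unfamiliar with iterative decoding, the following is a minimal sum-product (belief propagation) decoder for a toy parity-check code in the log-likelihood-ratio domain; it illustrates the transfer of ordinary symbol beliefs, not the discriminated beliefs introduced in the paper, and the parity-check matrix and channel values are invented for the example.

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])               # toy parity-check matrix

def bp_decode(llr, H, iters=20):
    """Sum-product decoding in the log-likelihood-ratio (LLR) domain."""
    m, n = H.shape
    msg_cv = np.zeros((m, n))                    # check -> variable messages
    hard = (llr < 0).astype(int)
    for _ in range(iters):
        # variable -> check: total belief minus the message arriving on that edge
        msg_vc = np.where(H == 1, llr + msg_cv.sum(axis=0) - msg_cv, 0.0)
        # check -> variable: tanh rule over the *other* edges of each check
        t = np.tanh(np.clip(msg_vc / 2.0, -20, 20))
        t = np.where(H == 1, t, 1.0)
        t = np.where(t == 0.0, 1e-12, t)         # avoid division by zero below
        extr = t.prod(axis=1, keepdims=True) / t
        msg_cv = np.where(H == 1,
                          2.0 * np.arctanh(np.clip(extr, -0.999999, 0.999999)),
                          0.0)
        belief = llr + msg_cv.sum(axis=0)        # posterior symbol beliefs
        hard = (belief < 0).astype(int)
        if not ((H @ hard) % 2).any():           # all parity checks satisfied
            break
    return hard

# all-zero codeword over a noisy channel: positive LLRs favour bit 0
llr = np.array([2.1, -0.3, 1.7, 0.9, 1.2, 0.4])
print(bp_decode(llr, H))                         # expected: [0 0 0 0 0 0]
```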
Computational Intelligence Characterization Method of Semiconductor Device
Liau, Eric, Schmitt-Landsiedel, Doris
Characterization of semiconductor devices is used to gather as much data about the device as possible to determine weaknesses in design or trends in the manufacturing process. In this paper, we propose a novel multiple trip point characterization concept to overcome the constraint of the single trip point concept in the device characterization phase. In addition, we use computational intelligence techniques (e.g. neural networks, fuzzy logic, and genetic algorithms) to further manipulate these sets of multiple trip point values and tests based on semiconductor test equipment. Our experimental results demonstrate an excellent design parameter variation analysis in the device characterization phase, as well as detection of a set of worst case tests that can provoke the worst case variation, whereas the traditional approach was not capable of detecting them.
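Purely as an illustration of the computational-intelligence ingredient (the fitness function, parameter ranges, and test model below are hypothetical stand-ins, not the authors' method or any real equipment interface), a plain genetic algorithm can search test-parameter space for a worst-case test:

```python
import random

# Hedged sketch: a genetic algorithm maximising a hypothetical "variation"
# measure over test parameters. Nothing here reproduces the multiple-trip-point
# method; `variation` is an invented stand-in for a real measurement.

def variation(params):
    v_supply, temperature, frequency = params
    return abs(v_supply - 1.1) * (temperature / 85.0) + 0.01 * frequency

def random_test():
    return [random.uniform(0.9, 1.3),      # supply voltage (V)
            random.uniform(-40, 125),      # temperature (deg C)
            random.uniform(10, 200)]       # clock frequency (MHz)

def evolve(generations=50, pop_size=30, mutation=0.2):
    population = [random_test() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=variation, reverse=True)   # keep the worst-case tests
        parents = population[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]   # uniform crossover
            if random.random() < mutation:
                i = random.randrange(len(child))
                child[i] += random.gauss(0, 0.05 * abs(child[i]) + 0.01)
            children.append(child)
        population = parents + children
    return max(population, key=variation)

print(evolve())    # parameters of the worst-case test found
```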
Analyzing covert social network foundation behind terrorism disaster
Maeno, Yoshiharu, Ohsawa, Yukio
This paper addresses a method to analyze the covert social network foundation hidden behind a terrorism disaster. The method solves a node discovery problem: discovering a node that plays a relevant role in a social network but has escaped monitoring of the presence and mutual relationships of nodes. It aims at integrating the expert investigator's prior understanding, insight into the nature of the terrorists' social network derived from complex graph theory, and computational data processing. The social network responsible for the 9/11 attack in 2001 is used in a simulation experiment to evaluate the performance of the method.
Simultaneous adaptation to the margin and to complexity in classification
We consider the problem of adaptation to the margin and to complexity in binary classification. We suggest an exponential weighting aggregation scheme. We use this aggregation procedure to construct classifiers which adapt automatically to margin and complexity. Two main examples are worked out in which adaptivity is achieved in frameworks proposed by Steinwart and Scovel [Learning Theory. Lecture Notes in Comput. Sci. 3559 (2005) 279--294. Springer, Berlin; Ann. Statist. 35 (2007) 575--607] and Tsybakov [Ann. Statist. 32 (2004) 135--166]. Adaptive schemes, like ERM or penalized ERM, usually involve a minimization step. This is not the case for our procedure.
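For concreteness, an exponential weighting aggregation scheme over a finite family of classifiers $f_1, \dots, f_M$ generally takes the following form (the exact empirical criterion $A_n$ and temperature $\beta$ are those specified in the paper; this is only the generic shape):

$$\tilde{f}_n = \sum_{j=1}^{M} \theta_j f_j, \qquad \theta_j = \frac{\exp\bigl(-n\,A_n(f_j)/\beta\bigr)}{\sum_{k=1}^{M} \exp\bigl(-n\,A_n(f_k)/\beta\bigr)},$$

so that the aggregate is a weighted average of the candidates rather than the output of a minimization step.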