Modelling Combinatorial Auctions in Linear Logic

AAAI Conferences

We show that linear logic can serve as an expressive framework in which to model a rich variety of combinatorial auction mechanisms. Due to its resource-sensitive nature, linear logic can easily represent bids in combinatorial auctions in which goods may be sold in multiple units, and we show how it naturally generalises several bidding languages familiar from the literature. Moreover, the winner determination problem, i.e., the problem of computing an allocation of goods to bidders that yields a certain amount of revenue for the auctioneer, can be modelled as the problem of finding a proof for a particular linear logic sequent.
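
As a hedged illustration of the encoding style (the atoms below are invented for this example, not taken from the paper): with each good represented by an atom and one unit of revenue by an atom $r$, a bid of two units for the bundle $\{a, b\}$ and a bid of one unit for $\{c\}$ become linear implications, and asking whether some allocation yields revenue of at least three becomes provability of the sequent

$$a,\; b,\; c,\; a \otimes b \multimap r \otimes r,\; c \multimap r \;\vdash\; r \otimes r \otimes r \otimes \top$$

The resource-sensitivity of the multiplicative conjunction $\otimes$ ensures that each good is sold at most once, while $\top$ absorbs unsold goods and unaccepted bids.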


Improving the Johnson-Lindenstrauss Lemma

arXiv.org Machine Learning

The Johnson-Lindenstrauss Lemma allows for the projection of $n$ points in $p$-dimensional Euclidean space onto a $k$-dimensional Euclidean space, with $k \ge \frac{24\ln n}{3\epsilon^2-2\epsilon^3}$, so that pairwise distances are preserved within a factor of $1\pm\epsilon$. Here, working directly with the distributions of the random distances rather than resorting to the moment generating function technique, we obtain an improvement on the lower bound for $k$. The additional reduction in dimension, when compared to bounds found in the literature, is at least $13\%$, and in some cases up to $30\%$. Using the moment generating function technique, we further provide a lower bound for $k$ using pairwise $L_2$ distances in the space of points to be projected and pairwise $L_1$ distances in the space of the projected points. Comparison with results obtained in the literature shows that the bound presented here provides an additional $36$-$40\%$ reduction.
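
As a hedged sketch of the classical statement (using the bound quoted above; the paper's improved bounds are not reproduced), the following computes the required $k$ and checks the distance-preservation guarantee with a Gaussian random projection. All sizes and constants are illustrative:

import numpy as np

def jl_dimension(n, eps):
    # Classical bound from the abstract: k >= 24 ln(n) / (3 eps^2 - 2 eps^3).
    return int(np.ceil(24 * np.log(n) / (3 * eps**2 - 2 * eps**3)))

rng = np.random.default_rng(0)
n, p, eps = 200, 1000, 0.25
k = jl_dimension(n, eps)                    # illustrative sizes, not from the paper

X = rng.normal(size=(n, p))
R = rng.normal(size=(p, k)) / np.sqrt(k)    # scaled Gaussian projection
Y = X @ R

def pdist2(A):
    # All pairwise squared Euclidean distances.
    sq = (A ** 2).sum(axis=1)
    return sq[:, None] + sq[None, :] - 2 * A @ A.T

iu = np.triu_indices(n, 1)
ratio = pdist2(Y)[iu] / pdist2(X)[iu]
print(k, ratio.min(), ratio.max())          # should fall within [1 - eps, 1 + eps]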


A two-step fusion process for multi-criteria decision applied to natural hazards in mountains

arXiv.org Artificial Intelligence

Mountain river torrents and snow avalanches cause human and material damage with dramatic consequences. Knowledge about these natural phenomena is often lacking, and expertise is required for decision and risk management purposes, using multi-disciplinary quantitative or qualitative approaches. Expertise is considered as a decision process based on imperfect information coming from more or less reliable and conflicting sources. We describe a methodology mixing the Analytic Hierarchy Process (AHP), a multi-criteria decision-aid method, with information fusion based on Belief Function Theory. Fuzzy Set and Possibility theories make it possible to transform quantitative and qualitative criteria into a common frame of discernment for decision in the Dempster-Shafer Theory (DST) and Dezert-Smarandache Theory (DSmT) contexts. The main issues concern the elicitation of basic belief assignments, conflict identification and management, the choice of fusion rules, and the validation of results, as well as the specific need to distinguish between importance, reliability, and uncertainty in the fusion process.
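
Since the methodology hinges on combining basic belief assignments, a minimal sketch of Dempster's rule of combination may help; the frame of discernment and the mass values below are invented for illustration, not taken from the paper:

from itertools import product

def dempster(m1, m2):
    # Combine two basic belief assignments given as dicts {frozenset: mass}.
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb              # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Toy frame: {low, high} avalanche risk; two partially reliable sources.
LOW, HIGH = frozenset({"low"}), frozenset({"high"})
BOTH = LOW | HIGH
m1 = {LOW: 0.6, BOTH: 0.4}
m2 = {HIGH: 0.5, BOTH: 0.5}
print(dempster(m1, m2))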


ECG Feature Extraction Techniques - A Survey Approach

arXiv.org Artificial Intelligence

ECG feature extraction plays a significant role in diagnosing most cardiac diseases. One cardiac cycle in an ECG signal consists of the P-QRS-T waves. A feature extraction scheme determines the amplitudes and intervals in the ECG signal for subsequent analysis; the amplitude and interval values of the P-QRS-T segment characterize the functioning of the heart. Recently, numerous techniques have been developed for analyzing the ECG signal, mostly based on Fuzzy Logic methods, Artificial Neural Networks (ANN), Genetic Algorithms (GA), Support Vector Machines (SVM), and other signal analysis techniques. All these techniques and algorithms have their advantages and limitations. This paper discusses various techniques and transformations proposed in the literature for extracting features from an ECG signal, and in addition provides a comparative study of the methods proposed by researchers for this task.
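
As a hedged illustration of the amplitude and interval features the survey discusses (not any particular surveyed method), here is a naive R-peak detector and RR-interval extractor run on a synthetic signal:

import numpy as np

fs = 250                                     # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)
# Synthetic "ECG": one narrow spike per second plus mild noise.
ecg = np.exp(-((t % 1.0 - 0.5) ** 2) / 0.0005) + 0.02 * rng.standard_normal(t.size)

threshold = 0.5 * ecg.max()
refractory = fs // 5                         # 200 ms: ignore re-triggers
peaks, last = [], -refractory
for i in range(1, ecg.size - 1):
    # A local maximum above the threshold, outside the refractory window.
    if (ecg[i] > threshold and ecg[i] >= ecg[i - 1]
            and ecg[i] > ecg[i + 1] and i - last > refractory):
        peaks.append(i)
        last = i

rr = np.diff(t[peaks])                       # RR intervals in seconds
print("R peaks:", len(peaks), "mean RR interval:", rr.mean())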


On Building a Knowledge Base for Stability Theory

arXiv.org Artificial Intelligence

By now, a considerable amount of mathematical knowledge has been formalized and stored in repositories: various mathematical theorems and theories have been taken into consideration and included in mathematical repositories. Applications more distant from pure mathematics, however, though based on these theories, often need more detailed knowledge about them. In this paper we present an example Mizar formalization from the area of electrical engineering, focusing on stability theory, which is based on complex analysis. We discuss what kind of special knowledge is necessary here and how much of this knowledge is included in existing repositories.
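
As a hedged aside on the mathematical content involved (not the Mizar text itself), the complex-analytic fact at the core of such stability arguments is the Hurwitz criterion, which fits in a few lines:

import numpy as np

def is_hurwitz_stable(coeffs):
    # A linear system is asymptotically stable iff all roots of its
    # characteristic polynomial lie in the open left half-plane.
    # coeffs: polynomial coefficients, highest degree first.
    roots = np.roots(coeffs)
    return bool(np.all(roots.real < 0))

print(is_hurwitz_stable([1, 3, 3, 1]))   # (s + 1)^3: stable -> True
print(is_hurwitz_stable([1, 0, 1]))      # s^2 + 1: roots on the axis -> False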


The Production of Probabilistic Entropy in Structure/Action Contingency Relations

arXiv.org Artificial Intelligence

Luhmann (1984) defined society as a communication system which is structurally coupled to, but not an aggregate of, human action systems. The communication system is then considered as self-organizing ("autopoietic"), as are human actors. Communication systems can be studied by using Shannon's (1948) mathematical theory of communication. The update of a network by action at one of the local nodes is then a well-known problem in artificial intelligence (Pearl 1988). By combining these various theories, a general algorithm for probabilistic structure/action contingency can be derived. The consequences of this contingency for each system, for their further histories, and for the stabilization of each side by counterbalancing mechanisms are discussed in both mathematical and theoretical terms. An empirical example is elaborated.
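
A minimal sketch of the Shannon (1948) quantities involved, applied to an invented structure/action contingency table (the paper's algorithm and empirical example are not reproduced):

import numpy as np

def entropy(p):
    # Shannon entropy in bits of a probability vector.
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Joint distribution over (structure state, action state); values invented.
joint = np.array([[0.30, 0.10],
                  [0.15, 0.45]])
H_joint = entropy(joint.ravel())
H_structure = entropy(joint.sum(axis=1))
H_action = entropy(joint.sum(axis=0))
# Mutual information: how strongly the contingency couples the two systems.
I = H_structure + H_action - H_joint
print(H_structure, H_action, I)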


Active Learning for Hidden Attributes in Networks

arXiv.org Machine Learning

In many networks, vertices have hidden attributes, or types, that are correlated with the network's topology. If the topology is known but these attributes are not, and if learning the attributes is costly, we need a method for choosing which vertex to query in order to learn as much as possible about the attributes of the other vertices. We assume the network is generated by a stochastic block model, but we make no assumptions about its assortativity or disassortativity. We choose which vertex to query using two methods: 1) maximizing the mutual information between its attributes and those of the others (a well-known approach in active learning) and 2) maximizing the average agreement between two independent samples of the conditional Gibbs distribution. Experimental results show that both these methods do much better than simple heuristics. They also consistently identify certain vertices as important by querying them early on.
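
As a hedged sketch of the first query criterion, here is a plug-in estimate of mutual information computed from Monte Carlo samples of the joint type assignment; the sample matrix below is a random stand-in, where a real use would draw it from the Gibbs distribution of the block model posterior:

import numpy as np

def mi(x, y):
    # Plug-in estimate of mutual information (bits) between two
    # discrete sample vectors.
    total = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                total += pxy * np.log2(pxy / (np.mean(x == a) * np.mean(y == b)))
    return total

rng = np.random.default_rng(1)
samples = rng.integers(0, 2, size=(500, 6))    # stand-in for Gibbs samples
# Query the vertex whose type is, on average, most informative about the others.
scores = [np.mean([mi(samples[:, v], samples[:, u])
                   for u in range(samples.shape[1]) if u != v])
          for v in range(samples.shape[1])]
print("query vertex:", int(np.argmax(scores)))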


Informal Concepts in Machines

arXiv.org Artificial Intelligence

This paper constructively proves the existence of an effective procedure generating a computable (total) function that is not contained in any given effectively enumerable set of such functions. The proof implies the existence of machines that process informal concepts such as computable (total) functions beyond the limits of any given Turing machine or formal system, that is, these machines can, in a certain sense, "compute" function values beyond these limits. We call these machines creative. We argue that any "intelligent" machine should be capable of processing informal concepts such as computable (total) functions, that is, it should be creative. Finally, we introduce hypotheses on creative machines, developed on the basis of theoretical investigations and experiments with computer programs. These hypotheses state that machine intelligence is the execution of a self-developing procedure starting from any universal programming language and any input.
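
The constructive core of such an existence proof is diagonalization; here is a minimal sketch, with a toy enumeration standing in for an effectively enumerable set of total computable functions:

# Given any effective enumeration g_0, g_1, ... of total computable
# functions, f(n) = g_n(n) + 1 is itself computable yet differs from
# every g_i at argument i, so it lies outside the enumerated set.
def enumerate_functions(i):
    # Toy effective enumeration: g_i(n) = i * n.
    return lambda n: i * n

def diagonal(n):
    # Computable, but not contained in the enumeration above.
    return enumerate_functions(n)(n) + 1

for i in range(5):
    assert diagonal(i) != enumerate_functions(i)(i)
print([diagonal(n) for n in range(5)])   # [1, 2, 5, 10, 17]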


Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data

arXiv.org Artificial Intelligence

One of the objectives of designing feature selection learning algorithms is to obtain classifiers that depend on a small number of attributes and have verifiable future performance guarantees. There are few, if any, approaches that successfully address the two goals simultaneously. Performance guarantees become crucial for tasks such as microarray data analysis, where very small sample sizes result in limited empirical evaluation. To the best of our knowledge, algorithms that give theoretical bounds on future performance have not been proposed so far in the context of the classification of gene expression data. In this work, we investigate the premise of learning a conjunction (or disjunction) of decision stumps in the Occam's Razor, Sample Compression, and PAC-Bayes learning settings for identifying a small subset of attributes that can be used to perform reliable classification. We apply the proposed approaches to gene identification from DNA microarray data and compare our results to those of well-known approaches proposed for the task. We show that our algorithm not only finds hypotheses with a much smaller number of genes while giving competitive classification accuracy, but also, unlike other approaches, has tight risk guarantees on future performance. The proposed approaches are general and extensible in terms of both designing novel algorithms and application to other domains.
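
As a hedged sketch of the underlying hypothesis class (not the paper's algorithm or its risk bounds), here is a greedy learner for a conjunction of decision stumps on stand-in data with a planted two-feature rule:

import numpy as np

def best_stump(X, y, keep):
    # Stump (feature, threshold, direction) that keeps all positive
    # examples still in `keep` and rejects the most kept negatives.
    best, best_rejected = None, -1
    for j in range(X.shape[1]):
        for thr in np.unique(X[keep, j]):
            for sign in (1, -1):
                pred = sign * (X[:, j] - thr) >= 0
                if pred[keep & (y == 1)].all():
                    rejected = int((~pred[keep & (y == 0)]).sum())
                    if rejected > best_rejected:
                        best, best_rejected = (j, thr, sign), rejected
    return best, best_rejected

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 20))
y = ((X[:, 3] > 0) & (X[:, 7] < 0.5)).astype(int)     # planted 2-gene rule

stumps, keep = [], np.ones(len(y), dtype=bool)
while keep[y == 0].any() and len(stumps) < 4:
    stump, rejected = best_stump(X, y, keep)
    if rejected <= 0:
        break
    stumps.append(stump)
    j, thr, sign = stump
    keep &= sign * (X[:, j] - thr) >= 0               # conjunction so far
print("selected stumps:", stumps)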


A Unifying View of Multiple Kernel Learning

arXiv.org Machine Learning

Recent research on multiple kernel learning has led to a number of approaches for combining kernels in regularized risk minimization. The proposed approaches include different formulations of objectives and varying regularization strategies. In this paper we present a unifying general optimization criterion for multiple kernel learning and show how existing formulations are subsumed as special cases. We also derive the criterion's dual representation, which is suitable for general smooth optimization algorithms. Finally, we evaluate multiple kernel learning in this framework analytically using a Rademacher complexity bound on the generalization error and empirically in a set of experiments.
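
A minimal sketch of the shared core of these formulations, a convex combination of base kernels plugged into a kernel machine; the weights here are fixed rather than learned, and the paper's unifying criterion and its dual are not reproduced:

import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel, rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)          # toy labels

# Combined kernel K = sum_m beta_m K_m with a convex weight vector.
kernels = [linear_kernel(X), polynomial_kernel(X, degree=2), rbf_kernel(X)]
beta = np.array([0.2, 0.5, 0.3])                      # weights sum to 1
K = sum(b * Km for b, Km in zip(beta, kernels))

clf = SVC(kernel="precomputed").fit(K, y)
print("train accuracy:", clf.score(K, y))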