A Comprehensive Trainable Error Model for Sung Music Queries

Journal of Artificial Intelligence Research

We propose a model for errors in sung queries, a variant of the hidden Markov model (HMM). This is a solution to the problem of identifying the degree of similarity between a (typically error-laden) sung query and a potential target in a database of musical works, an important problem in the field of music information retrieval. Similarity metrics are a critical component of 'query-by-humming' (QBH) applications, which search audio and multimedia databases for strong matches to oral queries. Our model comprehensively expresses the types of error or variation between target and query: cumulative and non-cumulative local errors, transposition, tempo and tempo changes, insertions, deletions, and modulation. The model is not only expressive but also automatically trainable: it is able to learn and generalize from query examples. We present results of simulations, designed to assess the discriminatory potential of the model, and of tests with real sung queries, which demonstrate its relevance to real-world applications.
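
As a rough illustration of the matching problem the model addresses (a much simplified stand-in, not the paper's trained HMM), the sketch below aligns a query to a target with match, insertion, and deletion moves via dynamic programming, scoring in pitch-interval space so that global transposition cancels out; the note representation and costs are our illustrative assumptions.

    import math

    def intervals(pitches):
        """Successive semitone differences; invariant to transposition."""
        return [b - a for a, b in zip(pitches, pitches[1:])]

    def alignment_score(query, target, ins_cost=2.0, del_cost=2.0):
        """Viterbi-style DP: cost of the best alignment of the two melodies.

        Local pitch errors are penalized quadratically (a stand-in for a
        Gaussian emission model); insertions and deletions get fixed costs
        (a stand-in for HMM transition probabilities)."""
        q, t = intervals(query), intervals(target)
        n, m = len(q), len(t)
        D = [[math.inf] * (m + 1) for _ in range(n + 1)]
        D[0][0] = 0.0
        for i in range(n + 1):
            for j in range(m + 1):
                if i < n:                      # extra query note: insertion
                    D[i + 1][j] = min(D[i + 1][j], D[i][j] + ins_cost)
                if j < m:                      # skipped target note: deletion
                    D[i][j + 1] = min(D[i][j + 1], D[i][j] + del_cost)
                if i < n and j < m:            # match with local pitch error
                    D[i + 1][j + 1] = min(D[i + 1][j + 1],
                                          D[i][j] + (q[i] - t[j]) ** 2)
        return D[n][m]

    # Lower scores mean stronger matches; this query is the target
    # transposed up 5 semitones with one wrong note.
    target = [60, 62, 64, 65, 67]
    query = [65, 67, 70, 70, 72]
    print(alignment_score(query, target))      # 2.0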


Universal Convergence of Semimeasures on Individual Random Sequences

arXiv.org Artificial Intelligence

Solomonoff's central result on induction is that the posterior of a universal semimeasure M converges rapidly and with probability 1 to the true sequence-generating posterior mu, if the latter is computable. Hence, M is eligible as a universal sequence predictor in case of unknown mu. Despite some nearby results and proofs in the literature, the stronger result of convergence for all (Martin-Löf) random sequences remained open. Such a convergence result would be particularly interesting and natural, since randomness can be defined in terms of M itself. We show that there are universal semimeasures M which do not converge for all random sequences, i.e. we give a partial negative answer to the open problem. We also provide a positive answer for some non-universal semimeasures. We define the incomputable measure D as a mixture over all computable measures and the enumerable semimeasure W as a mixture over all enumerable nearly-measures. We show that W converges to D and D to mu on all random sequences. The Hellinger distance, measuring closeness of two distributions, plays a central role.
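
For reference, the standard forms of the objects involved (the usual notation in this line of work, not quoted from the paper): a universal semimeasure M is a weighted mixture over the class of all enumerable semimeasures,

    M(x) = \sum_{\nu} w_\nu \, \nu(x),   with   w_\nu > 0,   \sum_{\nu} w_\nu \le 1,

and closeness of the one-step predictions of M and mu is measured by the squared Hellinger distance

    h_t(x_{<t}) = \sum_{a} \left( \sqrt{M(a \mid x_{<t})} - \sqrt{\mu(a \mid x_{<t})} \right)^2 .

Solomonoff's convergence result corresponds to the bound \sum_{t \ge 1} \mathbf{E}[h_t] \le \ln w_\mu^{-1} < \infty, which yields convergence with mu-probability 1 but, as the abstract notes, not automatically on every individual Martin-Löf random sequence.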


On the Convergence Speed of MDL Predictions for Bernoulli Sequences

arXiv.org Artificial Intelligence

We consider the Minimum Description Length principle for online sequence prediction. If the underlying model class is discrete, then the total expected square loss is a particularly interesting performance measure: (a) this quantity is bounded, implying convergence with probability one, and (b) it additionally specifies a 'rate of convergence'. Generally, for MDL only exponential loss bounds hold, as opposed to the linear bounds for a Bayes mixture. We show that this is the case even if the model class contains only Bernoulli distributions. We derive a new upper bound on the prediction error for countable Bernoulli classes. This implies a small bound (comparable to the one for Bayes mixtures) for certain important model classes. The results apply to many machine learning tasks, including classification and hypothesis testing. We provide arguments that our theorems generalize to countable classes of i.i.d. models.
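
In the setup such results are usually stated for (standard definitions, not quoted from the paper), the MDL predictor selects the two-part-code minimizer from the countable class C after each observed prefix x_{<t},

    \nu^*_t = \arg\max_{\nu \in \mathcal{C}} w_\nu \, \nu(x_{<t}) = \arg\min_{\nu \in \mathcal{C}} \left[ -\log \nu(x_{<t}) - \log w_\nu \right],

and the total expected square loss referred to above is

    \sum_{t=1}^{\infty} \mathbf{E} \left( \mu(1 \mid x_{<t}) - \nu^*_t(1 \mid x_{<t}) \right)^2 .

The contrast drawn in the abstract is then: the Bayes mixture admits a bound of order \ln w_\mu^{-1}, linear in the description length of the true model, whereas MDL in general only admits a bound of order w_\mu^{-1}, which is exponential in that description length.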


On the Complexity of Case-Based Planning

arXiv.org Artificial Intelligence

Case-based reasoning [23, 1, 32] is a problem-solving methodology based on using a library of solutions for similar problems, i.e., a library of "cases" with their respective solutions. Roughly speaking, case-based planning consists of storing generated plans and using them for finding new plans [15, 8, 29]. In practice, what is stored is not only a specific problem with a specific solution, but also some additional information that is considered useful for solving new problems, e.g., information about how the plan has been derived [30], why it works [20, 16], when it would not work [17], etc. Case-based planners differ in how they store cases, which cases they retrieve when the solution of a new problem is needed, how they adapt a solution to a new problem, and whether they use one or more cases for building a new plan.
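
The store/retrieve/adapt cycle just described, as a generic skeleton; the similarity measure and the adaptation step below are deliberately naive placeholders of our own, not any of the cited planners.

    class CaseLibrary:
        def __init__(self):
            self.cases = []                  # (problem, plan) pairs

        def store(self, problem, plan):
            self.cases.append((problem, plan))

        def retrieve(self, problem):
            # Placeholder similarity: number of shared goal facts.
            def sim(p, q):
                return len(set(p["goals"]) & set(q["goals"]))
            return max(self.cases, key=lambda c: sim(c[0], problem))

    class TrivialPlanner:
        def from_scratch(self, problem):
            return ["achieve " + g for g in problem["goals"]]

        def adapt(self, old_plan, old_problem, problem):
            # Keep steps for goals the problems share; plan the rest anew.
            shared = set(old_problem["goals"]) & set(problem["goals"])
            kept = [s for s in old_plan if s.split(" ", 1)[1] in shared]
            return kept + ["achieve " + g
                           for g in problem["goals"] if g not in shared]

    library, planner = CaseLibrary(), TrivialPlanner()
    p1 = {"goals": ["door_open", "light_on"]}
    library.store(p1, planner.from_scratch(p1))
    p2 = {"goals": ["door_open", "window_shut"]}
    old_problem, old_plan = library.retrieve(p2)
    print(planner.adapt(old_plan, old_problem, p2))
    # ['achieve door_open', 'achieve window_shut']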


Ordinal and Probabilistic Representations of Acceptance

Journal of Artificial Intelligence Research

An accepted belief is a proposition considered likely enough by an agent to be inferred from as if it were true. This paper bridges the gap between probabilistic and logical representations of accepted beliefs. To this end, natural properties of relations on propositions, describing relative strength of belief, are augmented with some conditions ensuring that accepted beliefs form a deductively closed set. This requirement turns out to be very restrictive. In particular, it is shown that the sets of accepted beliefs of an agent can always be derived from a family of possibility rankings of states. An agent accepts a proposition in a given context if this proposition is considered more possible than its negation in this context, for all possibility rankings in the family. These results are closely connected to the non-monotonic 'preferential' inference system of Kraus, Lehmann and Magidor and the so-called plausibility functions of Friedman and Halpern. The extent to which probability theory is compatible with acceptance relations is laid bare. A solution to the lottery paradox, which is considered a major impediment to the use of non-monotonic inference, is proposed using a special kind of probabilities (called lexicographic, or big-stepped). The setting of acceptance relations also suggests another way of approaching the theory of belief change, following the work of Gärdenfors and colleagues. Our view considers the acceptance relation as a primitive object from which belief sets are derived in various contexts.
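
In symbols (our rendering of the definitions just described): given the family F of possibility rankings \Pi over states, a proposition A is accepted in context C iff

    \Pi(A \cap C) > \Pi(\overline{A} \cap C)   for all \Pi \in \mathcal{F},

and a big-stepped probability on states s_1, ..., s_n is one whose ordered weights decay so fast that each outweighs all of its successors combined:

    p_1 > p_2 > \cdots > p_n,   with   p_i > \sum_{j > i} p_j   for all i < n.

Under such a probability, the set of propositions more probable than their negations is deductively closed in every context, which is the sense in which the lottery paradox is avoided.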


A Maximal Tractable Class of Soft Constraints

Journal of Artificial Intelligence Research

Many researchers in artificial intelligence are beginning to explore the use of soft constraints to express a set of (possibly conflicting) problem requirements. A soft constraint is a function defined on a collection of variables which associates some measure of desirability with each possible combination of values for those variables. However, the crucial question of the computational complexity of finding the optimal solution to a collection of soft constraints has so far received very little attention. In this paper we identify a class of soft binary constraints for which the problem of finding the optimal solution is tractable. In other words, we show that for any given set of such constraints, there exists a polynomial time algorithm to determine the assignment having the best overall combined measure of desirability. This tractable class includes many commonly occurring soft constraints, such as 'as near as possible' or 'as soon as possible after', as well as crisp constraints such as 'greater than'. Finally, we show that this tractable class is maximal, in the sense that adding any other form of soft binary constraint which is not in the class gives rise to a class of problems which is NP-hard.
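
To make the setting concrete, the sketch below encodes the abstract's example constraints as cost functions (lower cost is more desirable; infinite cost marks a forbidden pair, i.e. a crisp constraint) and finds the best assignment by brute force. The polynomial-time algorithm for the tractable class is the paper's contribution and is not reproduced here.

    from itertools import product

    INF = float("inf")

    def as_near_as_possible(x, y):      # soft: cost grows with distance
        return abs(x - y)

    def greater_than(x, y):             # crisp: forbidden pairs cost infinity
        return 0 if x > y else INF

    def optimal(domains, constraints):
        """Brute-force best assignment; constraints are (i, j, cost_fn)
        triples over variable indices, combined by summing costs."""
        best, best_cost = None, INF
        for assignment in product(*domains):
            cost = sum(f(assignment[i], assignment[j])
                       for i, j, f in constraints)
            if cost < best_cost:
                best, best_cost = assignment, cost
        return best, best_cost

    # Three variables over {1,...,5}: x > y, and y as near as possible to z.
    domains = [range(1, 6)] * 3
    print(optimal(domains, [(0, 1, greater_than),
                            (1, 2, as_near_as_possible)]))
    # ((2, 1, 1), 0)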


Semiclassical Neural Network

arXiv.org Artificial Intelligence

We have constructed a simple semiclassical model of a neural network in which neurons have quantum links with one another in a chosen way and affect one another in a fashion analogous to action potentials. We have examined the role of stochasticity introduced by the quantum potential and compared the system with the classical system of an integrate-and-fire model by Hopfield. Average periodicity and short-term retentivity of input memory are noted.
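
For orientation, the classical baseline referred to above looks roughly like the leaky integrate-and-fire sketch below, with an additive noise term standing in for the stochasticity the abstract attributes to the quantum potential; every parameter here is an illustrative assumption of ours.

    import random

    def simulate(steps, current=1.2, threshold=1.0, leak=0.1,
                 noise=0.05, dt=0.1):
        """Leaky integrate-and-fire neuron with additive Gaussian noise."""
        v, spikes = 0.0, []
        for t in range(steps):
            v += (current - leak * v) * dt + noise * random.gauss(0.0, 1.0)
            if v >= threshold:          # fire, then reset membrane potential
                spikes.append(t)
                v = 0.0
        return spikes

    spikes = simulate(200)
    gaps = [b - a for a, b in zip(spikes, spikes[1:])]
    print(sum(gaps) / len(gaps))        # average inter-spike interval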



The 2003 International Conference on Automated Planning and Scheduling (ICAPS-03)

AI Magazine

The 2003 International Conference on Automated Planning and Scheduling (ICAPS-03) was held 9 to 13 June 2003 in Trento, Italy. It was chaired by Enrico Giunchiglia (University of Genova), Nicola Muscettola (NASA Ames), and Dana Nau (University of Maryland). Piergiorgio Bertoli and Marco Benedetti (both from ITC-IRST) were the local chair and the workshop-tutorial coordination chair, respectively.


Incremental Heuristic Search in AI

AI Magazine

Incremental search reuses information from previous searches to find solutions to a series of similar search problems potentially faster than is possible by solving each search problem from scratch. This is important because many AI systems have to adapt their plans continuously to changes in (their knowledge of) the world. In this article, we give an overview of incremental search, focusing on LIFELONG PLANNING A*, and outline some of its possible applications in AI.
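
A compressed sketch of the LIFELONG PLANNING A* idea on a 4-connected grid with unit edge costs (the grid, names, and repair demo are our illustrative choices, not the article's code): each cell keeps a one-step lookahead value rhs alongside its cost g, only locally inconsistent cells (g != rhs) sit in the priority queue, and after the world changes only the affected cells are re-queued, so the repaired search reuses everything else from the previous run.

    import heapq

    INF = float("inf")

    class LPAStar:
        def __init__(self, width, height, start, goal, blocked=()):
            self.w, self.h = width, height
            self.start, self.goal = start, goal
            self.blocked = set(blocked)
            self.g = {}                          # best cost from start so far
            self.rhs = {start: 0.0}              # one-step lookahead cost
            self.U = [(self.key(start), start)]  # locally inconsistent cells

        def heur(self, s):                       # Manhattan distance to goal
            return abs(s[0] - self.goal[0]) + abs(s[1] - self.goal[1])

        def key(self, s):
            k = min(self.g.get(s, INF), self.rhs.get(s, INF))
            return (k + self.heur(s), k)

        def neighbors(self, s):
            x, y = s
            for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (0 <= n[0] < self.w and 0 <= n[1] < self.h
                        and n not in self.blocked):
                    yield n

        def update_vertex(self, u):
            if u != self.start:                  # recompute lookahead value
                self.rhs[u] = min((self.g.get(s, INF) + 1
                                   for s in self.neighbors(u)), default=INF)
            if self.g.get(u, INF) != self.rhs.get(u, INF):
                heapq.heappush(self.U, (self.key(u), u))  # lazy deletion

        def compute_shortest_path(self):
            while self.U:
                k, u = self.U[0]
                if (self.rhs.get(self.goal, INF) == self.g.get(self.goal, INF)
                        and k >= self.key(self.goal)):
                    break        # goal consistent and no smaller key queued
                heapq.heappop(self.U)
                if (k != self.key(u)
                        or self.g.get(u, INF) == self.rhs.get(u, INF)):
                    continue     # stale queue entry
                if self.g.get(u, INF) > self.rhs.get(u, INF):
                    self.g[u] = self.rhs[u]      # overconsistent: settle u
                    for s in self.neighbors(u):
                        self.update_vertex(s)
                else:
                    self.g[u] = INF              # underconsistent: undo u
                    for s in [u] + list(self.neighbors(u)):
                        self.update_vertex(s)
            return self.g.get(self.goal, INF)

    lpa = LPAStar(5, 5, (0, 0), (4, 4))
    print(lpa.compute_shortest_path())   # 8 on the empty grid
    lpa.blocked.add((2, 2))              # the world changes
    for s in lpa.neighbors((2, 2)):      # re-queue only the affected cells
        lpa.update_vertex(s)
    print(lpa.compute_shortest_path())   # still 8, found by a cheap repair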