
Shared Context Probabilistic Transducers

Neural Information Processing Systems

Recently, a model for supervised learning of probabilistic transducers represented by suffix trees was introduced. However, this algorithm tends to build very large trees, requiring very large amounts of computer memory. In this paper, we propose a new, more compact transducer model in which one shares the parameters of distributions associated with contexts yielding similar conditional output distributions. We illustrate the advantages of the proposed algorithm with comparative experiments on inducing a noun phrase recognizer.
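The sharing idea can be sketched as follows: contexts whose conditional output distributions lie within some distance threshold of each other are grouped, and each group stores a single shared distribution, shrinking the parameter count. The abstract does not give the paper's merging criterion, so the L1 threshold, the greedy grouping, and all names below are illustrative assumptions.

```python
def l1_distance(p, q):
    """L1 distance between two output distributions (dicts symbol -> prob)."""
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def share_contexts(contexts, threshold=0.1):
    """Greedily group suffix-tree contexts whose conditional output
    distributions are within `threshold` in L1 distance; each group
    then shares one representative distribution, reducing memory."""
    groups = []  # each entry: (representative distribution, [context strings])
    for ctx, dist in contexts.items():
        for rep, members in groups:
            if l1_distance(rep, dist) <= threshold:
                members.append(ctx)  # close enough: reuse the shared parameters
                break
        else:
            groups.append((dist, [ctx]))  # open a new parameter group
    return groups
```

With three contexts, two of which predict nearly identical output distributions, the sketch collapses them into two parameter groups instead of three.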


Modelling Seasonality and Trends in Daily Rainfall Data

Neural Information Processing Systems

Peter M Williams, School of Cognitive and Computing Sciences, University of Sussex, Falmer, Brighton BN1 9QH, UK. Email: peterw@cogs.susx.ac.uk. This paper presents a new approach to the problem of modelling daily rainfall using neural networks. We first model the conditional distributions of rainfall amounts, in such a way that the model itself determines the order of the process, and the time-dependent shape and scale of the conditional distributions. After integrating over particular weather patterns, we are able to extract seasonal variations and long-term trends. Analysis of rainfall data is important for many agricultural, ecological, and engineering activities. The design of irrigation and drainage systems, for instance, needs to take account not only of mean expected rainfall but also of rainfall volatility. Estimates of crop yields also depend on the distribution of rainfall during the growing season, as well as on the overall amount.
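As an illustration of a conditional distribution with a time-dependent scale, one can let the scale of a wet-day rainfall distribution vary sinusoidally over the annual cycle and mix it with a dry-day point mass at zero. The gamma form, the sinusoidal scale, and every parameter value below are assumptions made for this sketch, not the model actually fitted in the paper.

```python
import math

def seasonal_scale(day_of_year, base=5.0, amplitude=2.0, phase=0.0):
    """Hypothetical time-dependent scale parameter: a sinusoid over
    the annual cycle (mm of rain per 'unit' of the shape parameter)."""
    return base + amplitude * math.sin(2 * math.pi * (day_of_year - phase) / 365.25)

def expected_rainfall(day_of_year, p_wet=0.3, shape=0.7):
    """Expected daily rainfall under a simple wet/dry mixture: with
    probability p_wet the amount is gamma-distributed with the given
    shape and a seasonally varying scale (mean = shape * scale);
    otherwise the amount is zero."""
    return p_wet * shape * seasonal_scale(day_of_year)
```

Under this toy parameterisation the expected rainfall inherits the seasonal oscillation of the scale, which is the kind of structure the paper extracts after integrating over particular weather patterns.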


Bayesian Robustification for Audio Visual Fusion

Neural Information Processing Systems

Department of Cognitive Science, University of California, San Diego, La Jolla, CA 92092-0515. We discuss the problem of catastrophic fusion in multimodal recognition systems. This problem arises in systems that need to fuse different channels in non-stationary environments. Practice shows that when recognition modules within each modality are tested in contexts inconsistent with their assumptions, their influence on the fused product tends to increase, with catastrophic results. We explore a principled solution to this problem based upon Bayesian ideas of competitive models and inference robustification: each sensory channel is provided with simple white-noise context models, and the perceptual hypothesis and context are jointly estimated. Consequently, context deviations are interpreted as changes in white-noise contamination strength, automatically adjusting the influence of the module. The approach is tested on a fixed-lexicon automatic audiovisual speech recognition problem with very good results.
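The robustification idea can be sketched as follows: each channel is given a small grid of candidate white-noise levels, the best-fitting level is estimated alongside the hypothesis, and a channel whose scores are poorly explained is assigned a large inferred noise variance, which flattens its log-likelihoods and so reduces its influence on the fused decision. The Gaussian score model, the noise grid, and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import math

def channel_log_likelihoods(scores, noise_var):
    """Gaussian log-likelihoods of a channel's per-hypothesis match
    scores under an assumed white-noise contamination variance."""
    return [-0.5 * (s ** 2) / noise_var - 0.5 * math.log(2 * math.pi * noise_var)
            for s in scores]

def robust_fuse(audio_scores, visual_scores, noise_grid=(0.5, 1.0, 2.0, 4.0)):
    """Jointly pick, per channel, the white-noise level on the grid that
    best explains that channel's scores, then add the resulting
    log-likelihoods across channels. A mismatched channel gets a large
    inferred variance, hence a flatter, less influential posterior."""
    fused = [0.0] * len(audio_scores)
    for scores in (audio_scores, visual_scores):
        best = max(
            (channel_log_likelihoods(scores, v) for v in noise_grid),
            key=sum,  # maximum-likelihood noise level for this channel
        )
        for i, ll in enumerate(best):
            fused[i] += ll
    # index of the hypothesis with the highest fused log-likelihood
    return max(range(len(fused)), key=fused.__getitem__)
```

In this sketch a confident channel (one small score, one large) dominates a nearly uninformative one, because the uninformative channel's flat scores yield a flat likelihood profile regardless of its inferred noise level.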


Serial Order in Reading Aloud: Connectionist Models and Neighborhood Structure

Neural Information Processing Systems

Besides averaging over the 30 trials per condition, each mean in these charts also averages over the two input distribution conditions and the linear and quadratic function conditions, as these four cases are frequently observed violations of the statistical assumptions in nonlinear function approximation with locally linear models. In Figure 1b the number of factors equals the underlying dimensionality of the problem, and all algorithms perform essentially equally well. For perfectly Gaussian distributions in all random variables (not shown separately), LWFA's assumptions are perfectly fulfilled and it achieves the best results, followed almost indistinguishably by LWPLS. For the "unequal noise" condition, the two PCA-based techniques, LWPCA and LWPCR, perform the worst since, as expected, they choose suboptimal projections. However, when the statistical assumptions are violated, LWFA loses part of its advantage, such that the summary results become fairly balanced in Figure 1b. The quality of function fitting changes significantly when the assumed number of factors is wrong, as illustrated in Figures 1a and 1c.


Multiresolution Tangent Distance for Affine-invariant Classification

Neural Information Processing Systems

The ability to rely on similarity metrics invariant to image transformations is an important issue for image classification tasks such as face or character recognition. We analyze an invariant metric that has performed well for the latter - the tangent distance - and study its limitations when applied to regular images, showing that the most significant among these (convergence to local minima) can be drastically reduced by computing the distance in a multiresolution setting. This leads to the multiresolution tangent distance, which exhibits significantly higher invariance to image transformations and can be easily combined with robust estimation procedures.
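The one-sided tangent distance projects the difference between two images onto the linear subspace spanned by the transformation tangent vectors of one of them, and the multiresolution variant evaluates this over an image pyramid. The sketch below uses a single horizontal-shift tangent, 2x2 block-average downsampling, and a simple sum over pyramid levels; these choices, and all function names, are assumptions made for illustration rather than the paper's exact procedure.

```python
import numpy as np

def tangent_distance(x, y, tangents):
    """One-sided tangent distance: minimum Euclidean distance from y to
    the affine subspace {x + sum_k a_k * T_k} spanned by x's tangents."""
    T = np.stack([t.ravel() for t in tangents], axis=1)  # (pixels, k)
    r = (y - x).ravel()
    a, *_ = np.linalg.lstsq(T, r, rcond=None)            # best tangent coefficients
    return np.linalg.norm(r - T @ a)

def shift_tangent(img):
    """Finite-difference approximation to the horizontal-translation tangent."""
    return np.gradient(img, axis=1)

def multires_tangent_distance(x, y, levels=2):
    """Accumulate the tangent distance over a coarse-to-fine pyramid
    built by 2x2 block averaging - a sketch of the multiresolution
    idea used to avoid the local minima of the single-scale distance."""
    d = 0.0
    for _ in range(levels):
        d += tangent_distance(x, y, [shift_tangent(x)])
        # downsample both images by 2x2 block averaging
        x = x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))
        y = y.reshape(y.shape[0] // 2, 2, y.shape[1] // 2, 2).mean(axis=(1, 3))
    return d
```

Because the distance is a projection residual, it is never larger than the plain Euclidean distance, which is why the metric gains invariance to the modeled transformations.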


A Generic Framework for Constraint-Directed Search and Scheduling

AI Magazine

This article introduces a generic framework for constraint-directed search. The research literature in constraint-directed scheduling is placed within the framework, both to provide insight into, and examples of, the framework and to allow a new perspective on the scheduling literature. We show how a number of algorithms from constraint-directed scheduling research can be conceptualized within the framework. This conceptualization allows us to identify and compare variations of the framework's components and provides a new perspective on open research issues. We discuss the prospects for an overall comparison of scheduling strategies and show that firm conclusions vis-a-vis such a comparison are not supported by the literature. Our principal conclusion is the need for an empirical model of both the characteristics of scheduling problems and the solution techniques themselves. Our framework is offered as a tool for developing such an understanding of constraint-directed scheduling and, more generally, constraint-directed search.


The Naive Physics Perplex

AI Magazine

The "Naive Physics Manifesto" of Pat Hayes (1978) proposes a large-scale project to develop a formal theory encompassing the entire knowledge of physics of naive reasoners, expressed in a declarative symbolic form. The theory is organized in clusters of closely interconnected concepts and axioms. More recent work on the representation of commonsense physical knowledge has followed a somewhat different methodology. The goal has been to develop a competence theory powerful enough to justify commonsense physical inferences, and the research is organized in microworlds, each microworld covering a small range of physical phenomena. In this article, I compare the advantages and disadvantages of the two approaches.


Intelligent Data Analysis: Reasoning About Data

AI Magazine

The Second International Symposium on Intelligent Data Analysis (IDA97) was held at Birkbeck College, University of London, on 4 to 6 August 1997. The main theme of IDA97 was reasoning about how to analyze data, perhaps as human analysts do, by exploiting many methods from diverse disciplines. This article outlines several key issues and challenges, discusses how they were addressed at the conference, and presents opportunities for further work in the field.


The DARPA High-Performance Knowledge Bases Project

AI Magazine

Now completing its first year, the High-Performance Knowledge Bases Project promotes technology for developing very large, flexible, and reusable knowledge bases. The project is supported by the Defense Advanced Research Projects Agency and includes more than 15 contractors in universities, research laboratories, and companies. The evaluation of the constituent technologies centers on two challenge problems, in crisis management and battlespace reasoning, each demanding powerful problem solving with very large knowledge bases. This article discusses the challenge problems, the constituent technologies, and their integration and evaluation.