Bayesian Robustification for Audio Visual Fusion
Movellan, Javier R., Mineiro, Paul
We discuss the problem of catastrophic fusion in multimodal recognition systems. This problem arises in systems that need to fuse different channels in non-stationary environments. Practice shows that when recognition modules within each modality are tested in contexts inconsistent with their assumptions, their influence on the fused product tends to increase, with catastrophic results. We explore a principled solution to this problem based upon Bayesian ideas of competitive models and inference robustification: each sensory channel is provided with simple white-noise context models, and the perceptual hypothesis and context are jointly estimated. Consequently, context deviations are interpreted as changes in white-noise contamination strength, automatically adjusting the influence of the module. The approach is tested on a fixed-lexicon automatic audiovisual speech recognition problem with very good results.
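To make the mechanism described in this abstract concrete, here is a minimal sketch (not the authors' HMM-based system) that assumes simple Gaussian template matching per channel; the names channel_log_likelihood, fuse_and_classify, and the noise_vars grid are illustrative assumptions. A channel whose context deviates from its assumptions is best explained by a large noise variance, which flattens its likelihood and shrinks its influence on the fused decision.

    import numpy as np

    def channel_log_likelihood(obs, template, noise_vars=(0.1, 1.0, 10.0, 100.0)):
        # Log-likelihood of one channel's observation under a hypothesis template,
        # maximised over a grid of white-noise variances (the "context" estimate).
        # A poorly matching channel is explained by a large variance, which
        # flattens its likelihood and reduces its weight in the fused decision.
        best = -np.inf
        for var in noise_vars:
            resid = obs - template
            ll = -0.5 * np.sum(resid ** 2 / var + np.log(2 * np.pi * var))
            best = max(best, ll)
        return best

    def fuse_and_classify(audio_obs, video_obs, audio_templates, video_templates):
        # Jointly pick the hypothesis (word index) whose combined audio and video
        # evidence, each with its own estimated noise level, is largest.
        scores = [channel_log_likelihood(audio_obs, a) +
                  channel_log_likelihood(video_obs, v)
                  for a, v in zip(audio_templates, video_templates)]
        return int(np.argmax(scores))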
Serial Order in Reading Aloud: Connectionist Models and Neighborhood Structure
Milostan, Jeanne C., Cottrell, Garrison W.
Besides averaging over the 30 trials per condition, each mean in these charts also averages over the two input-distribution conditions and the linear and quadratic function conditions, as these four cases are frequently observed violations of the statistical assumptions in nonlinear function approximation with locally linear models. In Figure 1b the number of factors equals the underlying dimensionality of the problem, and all algorithms perform essentially equally well. For perfectly Gaussian distributions in all random variables (not shown separately), LWFA's assumptions are perfectly fulfilled and it achieves the best results, closely followed by LWPLS with almost indistinguishable performance. For the "unequal noise" condition, the two PCA-based techniques, LWPCA and LWPCR, perform the worst since, as expected, they choose suboptimal projections. However, when the statistical assumptions are violated, LWFA loses part of its advantage, so that the summary results in Figure 1b become fairly balanced. The quality of function fitting changes significantly when the assumed number of factors is incorrect, as illustrated in Figures 1a and 1c.
Multiresolution Tangent Distance for Affine-invariant Classification
Vasconcelos, Nuno, Lippman, Andrew
The ability to rely on similarity metrics invariant to image transformations is an important issue for image classification tasks such as face or character recognition. We analyze an invariant metric that has performed well for the latter - the tangent distance - and study its limitations when applied to regular images, showing that the most significant among these (convergence to local minima) can be drastically reduced by computing the distance in a multiresolution setting. This leads to the multiresolution tangent distance, which exhibits significantly higher invariance to image transformations and can be easily combined with robust estimation procedures.
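As a rough illustration of the coarse-to-fine idea described above, the sketch below evaluates a one-sided tangent distance (translation tangents only) on successively finer low-pass versions of two images. It is an assumption-laden sketch, not the paper's procedure: the actual method also propagates the estimated transformation between resolution levels, which is omitted here, and the function names are illustrative.

    import numpy as np
    from scipy.ndimage import zoom

    def tangent_vectors(img):
        # Tangent vectors for small translations: the image's spatial derivatives.
        gy, gx = np.gradient(img)
        return np.stack([gy.ravel(), gx.ravel()], axis=1)

    def tangent_distance(x, y):
        # One-sided tangent distance: min over a of ||x + T a - y||^2, a closed-form
        # least-squares problem in the tangent-vector coefficients a.
        T = tangent_vectors(x)
        r = (y - x).ravel()
        a, *_ = np.linalg.lstsq(T, r, rcond=None)
        return float(np.sum((r - T @ a) ** 2))

    def multiresolution_tangent_distance(x, y, levels=3):
        # Evaluate the distance from coarsest to finest resolution: low-pass,
        # down-sampled levels enlarge the region where the linear (tangent)
        # approximation is valid, which is what tames the local-minima problem.
        per_level = []
        for lvl in reversed(range(levels)):
            s = 1.0 / (2 ** lvl)
            xs, ys = zoom(x, s, order=1), zoom(y, s, order=1)
            per_level.append(tangent_distance(xs, ys) / xs.size)
        # Return the finest-level distance plus the full coarse-to-fine trace.
        return per_level[-1], per_level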
A Generic Framework for Constraint-Directed Search and Scheduling
Beck, J. Christopher, Fox, Mark S.
This article introduces a generic framework for constraint-directed search. The research literature in constraint-directed scheduling is placed within the framework, both to provide insight into, and examples of, the framework and to allow a new perspective on the scheduling literature. We show how a number of algorithms from constraint-directed scheduling research can be conceptualized within the framework. This conceptualization allows us to identify and compare variations of components of our framework and provides a new perspective on open research issues. We discuss the prospects for an overall comparison of scheduling strategies and show that firm conclusions vis-a-vis such a comparison are not supported by the literature. Our principal conclusion is the need for an empirical model of both the characteristics of scheduling problems and the solution techniques themselves. Our framework is offered as a tool for the development of such an understanding of constraint-directed scheduling and, more generally, constraint-directed search.
Naive Physics Perplex
The "Naive Physics Manifesto" of Pat Hayes (1978) proposes a large-scale project to develop a formal theory encompassing the entire knowledge of physics of naive reasoners, expressed in a declarative symbolic form. The theory is organized in clusters of closely interconnected concepts and axioms. More recent work on the representation of commonsense physical knowledge has followed a somewhat different methodology. The goal has been to develop a competence theory powerful enough to justify commonsense physical inferences, and the research is organized in microworlds, each microworld covering a small range of physical phenomena. In this article, I compare the advantages and disadvantages of the two approaches.
Intelligent Data Analysis: Reasoning About Data
Berthold, Michael, Cohen, Paul R., Liu, Xiaohui
The Second International Symposium on Intelligent Data Analysis (IDA97) was held at Birkbeck College, University of London, from 4 to 6 August 1997. The main theme of IDA97 was reasoning about how to analyze data, perhaps as human analysts do, by exploiting many methods from diverse disciplines. This article outlines several key issues and challenges, discusses how they were addressed at the conference, and presents opportunities for further work in the field.
The DARPA High-Performance Knowledge Bases Project
Cohen, Paul R., Schrag, Robert, Jones, Eric, Pease, Adam, Lin, Albert, Starr, Barbara, Gunning, David, Burke, Murray
Now completing its first year, the High-Performance Knowledge Bases Project promotes technology for developing very large, flexible, and reusable knowledge bases. The project is supported by the Defense Advanced Research Projects Agency and includes more than 15 contractors in universities, research laboratories, and companies. The evaluation of the constituent technologies centers on two challenge problems, in crisis management and battlespace reasoning, each demanding powerful problem solving with very large knowledge bases. This article discusses the challenge problems, the constituent technologies, and their integration and evaluation.
Building of a Corporate Memory for Traffic-Accident Analysis
Dieng, Rose, Giboin, Alain, Amerge, Christelle, Corby, Olivier, Despres, Sylvie, Alpay, Laurence, Labidi, Sofiane, Lapalut, Stephane
This article presents an experiment in expertise capitalization for road traffic-accident analysis. We study the integration of models of expertise from different members of an organization into a coherent corporate expertise model. We present our elicitation protocol and the generic models and tools we exploited for knowledge modeling in this context of multiple experts. We compare the knowledge models obtained for seven experts in accidentology and their representation through conceptual graphs. Finally, we discuss the results of our experiment from a knowledge-capitalization viewpoint.