
Connectionist Models for Auditory Scene Analysis

Neural Information Processing Systems

Although the visual and auditory systems share the same basic task of informing an organism about its environment, most connectionist work on hearing to date has been devoted to the very different problem of speech recognition. We believe that the most fundamental task of the auditory system is the analysis of acoustic signals into components corresponding to individual sound sources, which Bregman has called auditory scene analysis. Computational and connectionist work on auditory scene analysis is reviewed, and the outline of a general model that includes these approaches is described.


Segmental Neural Net Optimization for Continuous Speech Recognition

Neural Information Processing Systems

Previously, we had developed the concept of a Segmental Neural Net (SNN) for phonetic modeling in continuous speech recognition (CSR). This kind of neural network technology advanced the state of the art of large-vocabulary CSR, which employs Hidden Markov Models (HMM), for the ARPA 1000-word Resource Management corpus. More recently, we started porting the neural net system to a larger, more challenging corpus, the ARPA 20,000-word Wall Street Journal (WSJ) corpus. During the porting, we explored the following research directions to refine the system: i) training context-dependent models with a regularization method; ii) training the SNN with projection pursuit; and iii) combining different models into a hybrid system. When tested on both a development set and an independent test set, the resulting neural net system alone yielded performance at the level of the HMM system, and the hybrid SNN/HMM system achieved a consistent 10-15% word error reduction over the HMM system. This paper describes our hybrid system, with emphasis on the optimization methods employed.
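
As a point of reference, the sketch below shows one common way such a hybrid system can be realized: rescoring a list of candidate hypotheses by linearly combining the log scores of the two models. The hypothesis list, the scoring callables, and the interpolation weight `lam` are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch (not the paper's actual code) of hybrid SNN/HMM rescoring:
# each candidate hypothesis receives a weighted combination of the two models'
# log scores, and the best-scoring hypothesis is selected.

def hybrid_rescore(hypotheses, hmm_log_score, snn_log_score, lam=0.5):
    """Return the hypothesis with the best combined log score.

    hypotheses     -- list of candidate word sequences (e.g. from an HMM decoder)
    hmm_log_score  -- callable: hypothesis -> HMM log score
    snn_log_score  -- callable: hypothesis -> segmental-neural-net log score
    lam            -- interpolation weight between the two models (assumed)
    """
    def combined(h):
        return lam * snn_log_score(h) + (1.0 - lam) * hmm_log_score(h)

    return max(hypotheses, key=combined)
```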


Analysis of Short Term Memories for Neural Networks

Neural Information Processing Systems

Time-varying signals, natural or man-made, carry information in their time structure. The problem is then one of devising methods and topologies (in the case of interest here, neural topologies) that explore information along time. This problem can be appropriately called temporal pattern recognition, as opposed to the more traditional case of static pattern recognition. In static pattern recognition an input is represented by a point in a space whose dimensionality is given by the number of signal features, while in temporal pattern recognition the inputs are sequences of features. These sequences of features can also be thought of as points, but in a vector space of increasing dimensionality. Fortunately, the recent history of the input signal is the part that bears the most information for decision making, so the effective dimensionality is finite, although very large and unspecified a priori.
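
As a concrete illustration of the kind of short-term memory structure the paper analyzes, the sketch below uses a tapped delay line: the (in principle unbounded) history of a signal is truncated to its most recent samples, turning temporal pattern recognition back into a static classification problem on a fixed-dimensional vector. The window length `K` is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def delay_line_features(signal, K=10):
    """Map a 1-D signal to a sequence of K-dimensional feature vectors,
    where each vector holds the K most recent samples at one time step."""
    signal = np.asarray(signal, dtype=float)
    return np.stack([signal[t - K:t] for t in range(K, len(signal) + 1)])

# Example: each row can now be fed to any static classifier.
x = np.sin(np.linspace(0, 4 * np.pi, 100))
features = delay_line_features(x, K=10)   # shape (91, 10)
```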



Comparison Training for a Rescheduling Problem in Neural Networks

Neural Information Processing Systems

Many events, such as flight delays or the absence of a crew member, require the crew-pool rescheduling team to change the initial schedule (rescheduling). In this paper, we show that the neural network comparison paradigm applied to the backgammon game by Tesauro (Tesauro and Sejnowski, 1989) can also be applied to the rescheduling problem of an aircrew pool. Indeed, both problems correspond to choosing the best solution.
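
For orientation, the sketch below gives a minimal reading of the comparison-training idea: learn a scoring function so that, for each training pair, the preferred candidate (here, a schedule encoded as a feature vector) receives the higher score; rescheduling then amounts to picking the top-scoring candidate. The linear scorer, update rule, and feature encoding are assumptions for illustration, not the paper's system.

```python
import numpy as np

def train_comparator(pairs, dim, lr=0.1, epochs=50):
    """Learn a linear scoring function from (better, worse) feature-vector pairs."""
    w = np.zeros(dim)
    for _ in range(epochs):
        for better, worse in pairs:
            # Perceptron-style update whenever the desired ranking is violated.
            if w @ np.asarray(better) <= w @ np.asarray(worse):
                w += lr * (np.asarray(better) - np.asarray(worse))
    return w

def best_schedule(candidates, w):
    """Choose the candidate schedule with the highest learned score."""
    return max(candidates, key=lambda c: w @ np.asarray(c))
```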


A Massively-Parallel SIMD Processor for Neural Network and Machine Vision Applications

Neural Information Processing Systems

Many well-known neural network techniques for adaptive pattern classification and function approximation are inherently highly parallel, and have thus proven difficult to implement for real-time applications at a reasonable cost on conventional serial hardware.


Dual Mechanisms for Neural Binding and Segmentation

Neural Information Processing Systems

We propose that the binding and segmentation of visual features is mediated by two complementary mechanisms: a low-resolution, spatial-based, resource-free process and a high-resolution, temporal-based, resource-limited process. In the visual cortex, the former depends upon the orderly topographic organization in striate and extrastriate areas, while the latter may be related to observed temporal relationships between neuronal activities. Computer simulations illustrate the role the two mechanisms play in figure/ground discrimination, depth-from-occlusion, and the vividness of perceptual completion.


Bayesian Self-Organization

Neural Information Processing Systems

Recent work by Becker and Hinton (Becker and Hinton, 1992) shows a promising mechanism, based on maximizing mutual information under an assumption of spatial coherence, by which a system can self-organize to learn visual abilities such as binocular stereo. We introduce a more general criterion, based on Bayesian probability theory, and thereby demonstrate a connection to Bayesian theories of visual perception and to other organization principles for early vision (Atick and Redlich, 1990). Methods for implementation using variants of stochastic learning are described and, for the special case of linear filtering, we derive an analytic expression for the output. The input intensity patterns received by the human visual system are typically complicated functions of the object surfaces and light sources in the world, so the visual system must be able to extract information from the input intensities that is relatively independent of the actual intensity values.
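
For context, the referenced Becker-Hinton coherence objective is often stated in the form below, under a Gaussian assumption on the outputs $d_a$ and $d_b$ of two modules viewing neighbouring image patches; this is a hedged paraphrase of their criterion, not a formula taken from this paper.

```latex
% Spatial-coherence objective of Becker and Hinton (1992), Gaussian case:
% maximize the mutual information between the outputs of two neighbouring modules,
I(d_a; d_b) \;=\; \tfrac{1}{2}\,\log \frac{V(d_a + d_b)}{V(d_a - d_b)},
% where V(\cdot) denotes the variance over the training ensemble.  The objective
% is large when the two outputs agree up to independent noise.
```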


Neural Network Definitions of Highly Predictable Protein Secondary Structure Classes

Neural Information Processing Systems

We use two co-evolving neural networks to determine new classes of protein secondary structure that are significantly more predictable from local amino acid sequence than the conventional secondary structure classification. Accurate prediction of the conventional secondary structure classes, alpha helix, beta strand, and coil, from primary sequence has long been an important problem in computational molecular biology. Neural networks have been a popular method for attempting to predict these conventional secondary structure classes, but accuracy has been disappointingly low. The algorithm presented here uses neural networks to simultaneously examine both sequence and structure data, and to evolve new classes of secondary structure that can be predicted from sequence with significantly higher accuracy than the conventional classes. These new classes have both similarities to, and differences from, the conventional alpha helix, beta strand, and coil.


Structural and Behavioral Evolution of Recurrent Networks

Neural Information Processing Systems

This paper introduces GNARL, an evolutionary program that induces recurrent neural networks which are structurally unconstrained. In contrast to constructive and destructive algorithms, GNARL employs a population of networks and uses a fitness function's unsupervised feedback to guide its search through network space. Annealing is used in generating both Gaussian weight changes and structural modifications. Applying GNARL to a complex search-and-collection task demonstrates that the system is capable of inducing networks with complex internal dynamics.
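
The sketch below shows one common reading of the annealing idea mentioned in the abstract: the severity of a network's mutation is tied to a temperature derived from its fitness, so poor networks are perturbed strongly while good ones receive only fine-grained changes. The temperature definition and the scale `alpha` are assumptions for illustration, not the authors' exact settings.

```python
import numpy as np

def temperature(fitness, max_fitness):
    """High temperature for low-fitness networks, low for high-fitness ones."""
    return 1.0 - fitness / max_fitness

def mutate_weights(weights, fitness, max_fitness, alpha=1.0, rng=None):
    """Apply Gaussian weight perturbations scaled by the network's temperature."""
    if rng is None:
        rng = np.random.default_rng()
    T = temperature(fitness, max_fitness)
    return weights + rng.normal(0.0, alpha * T, size=weights.shape)
```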