Information Technology


Transient Signal Detection with Neural Networks: The Search for the Desired Signal

Neural Information Processing Systems

Matched filtering has been one of the most powerful techniques employed for transient detection. Here we show that a dynamic neural network outperforms this conventional approach. When the artificial neural network (ANN) is trained with supervised learning schemes, a desired signal must be supplied for all time, although we are only interested in detecting the transient. In this paper we also show how different strategies for constructing the desired signal affect detection performance. The extension of the Bayes decision rule (a 0/1 desired signal), optimal in static classification, performs worse than desired signals constructed from random noise or from prediction during the background.

1 INTRODUCTION

Detection of poorly defined waveshapes in a nonstationary, high-noise background is an important and difficult problem in signal processing.
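
As a point of reference for the conventional approach the paper argues against, the sketch below is a minimal matched-filter detector: correlate the observation with a known transient template and threshold the output. The template, threshold, and toy signal are hypothetical and purely illustrative, not taken from the paper.

```python
import numpy as np

def matched_filter_detect(x, template, threshold):
    """Classical matched-filter detector: correlate the observation with a
    known transient template and flag samples whose correlation exceeds a
    fixed threshold."""
    # Convolving with the time-reversed template is equivalent to correlation.
    score = np.convolve(x, template[::-1], mode="same")
    return score, score > threshold

# Toy usage: a short hypothetical transient buried in Gaussian noise.
rng = np.random.default_rng(0)
template = np.hanning(32) * np.sin(2 * np.pi * 0.2 * np.arange(32))
x = rng.normal(scale=0.5, size=1024)
x[500:532] += template
score, hits = matched_filter_detect(x, template, threshold=3.0)
print("samples flagged:", np.flatnonzero(hits))
```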


Modeling Consistency in a Speaker Independent Continuous Speech Recognition System

Neural Information Processing Systems

We would like to incorporate speaker-dependent consistencies, such as gender, in an otherwise speaker-independent speech recognition system. In this paper we discuss a Gender Dependent Neural Network (GDNN) which can be tuned for each gender while sharing most of the speaker-independent parameters. We use a classification network to help generate gender-dependent phonetic probabilities for a statistical (HMM) recognition system. The gender classification net predicts the gender with high accuracy, 98.3%, on a Resource Management test set. However, integrating the GDNN into our hybrid HMM-neural network recognizer yielded an improvement in recognition score that is not statistically significant on the same test set.
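
One way to picture how a gender classification net could help produce gender-dependent phonetic probabilities is to weight two gender-tuned output layers by the classifier's posterior. The sketch below is an assumed combination scheme for illustration only, not the paper's actual GDNN wiring; `gender_net`, `male_net`, and `female_net` are hypothetical callables returning logit arrays.

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def gender_mixed_phone_probs(x, gender_net, male_net, female_net):
    """Weight the phonetic posteriors of two gender-tuned output layers by
    the gender classifier's posterior (hypothetical combination scheme)."""
    p_male = softmax(gender_net(x))[..., 0:1]      # P(male | x)
    phones_m = softmax(male_net(x))                # P(phone | x, male)
    phones_f = softmax(female_net(x))              # P(phone | x, female)
    return p_male * phones_m + (1.0 - p_male) * phones_f
```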


A Hybrid Linear/Nonlinear Approach to Channel Equalization Problems

Neural Information Processing Systems

The channel equalization problem is an important problem in high-speed communications: the transmitted symbol sequence is distorted by neighboring symbols. Traditionally, channel equalization is treated as a channel-inversion operation. One problem with this approach is that there is no direct correspondence between the error probability and the residual error produced by the channel inversion. In this paper, the optimal equalizer design is instead formulated as a classification problem. The optimal classifier can be constructed by the Bayes decision rule and is, in general, nonlinear. An efficient hybrid linear/nonlinear approach is proposed to train the equalizer, and the error probability of the new linear/nonlinear equalizer is shown to be better than that of a linear equalizer on an experimental channel.

1 INTRODUCTION
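
To make the classification view concrete, the sketch below builds the Bayes-decision equalizer for a known FIR channel with BPSK symbols and additive white Gaussian noise: enumerate the noiseless "channel states" a window of received samples can take, group them by the value of the symbol being detected, and pick the class with the larger summed Gaussian likelihood. This is a standard textbook construction offered as an illustration, not the paper's hybrid training procedure; the channel, noise variance, and decision delay are placeholders.

```python
import numpy as np
from itertools import product

def bayes_equalizer(received_window, channel, sigma2, delay=1):
    """Bayes-decision equalizer for a known FIR channel, BPSK input, AWGN.

    received_window : m received samples to classify.
    channel         : FIR channel impulse response of length L.
    delay           : which of the m + L - 1 contributing symbols to detect.
    """
    m = len(received_window)
    n_sym = m + len(channel) - 1          # symbols influencing the window
    states = {+1: [], -1: []}
    for bits in product([-1, +1], repeat=n_sym):
        # Noiseless window this symbol sequence would produce.
        y = np.convolve(bits, channel, mode="valid")
        states[bits[delay]].append(y)
    r = np.asarray(received_window, dtype=float)

    def score(cls):
        # Sum of Gaussian likelihoods over all states in the class.
        return sum(np.exp(-np.sum((r - s) ** 2) / (2 * sigma2))
                   for s in states[cls])

    return +1 if score(+1) > score(-1) else -1
```

The decision boundary implied by the summed-Gaussian scores is in general nonlinear in the received samples, which is the abstract's point that the optimal (Bayes) equalizer cannot be realized by a purely linear filter.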



Physiologically Based Speech Synthesis

Neural Information Processing Systems

This study demonstrates a paradigm for modeling speech production based on neural networks. Using physiological data from speech utterances, a neural network learns the forward dynamics relating motor commands to muscles and the ensuing articulator behavior, which allows articulator trajectories to be generated from motor commands constrained by phoneme input strings and global performance parameters. From these movement trajectories, a second neural network generates PARCOR parameters that are then used to synthesize the speech acoustics.
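
PARCOR (reflection) coefficients such as those produced by the second network are conventionally fed to an all-pole lattice synthesis filter. The sketch below is that standard filter, not the paper's synthesizer; the excitation and coefficient values are assumed placeholders.

```python
import numpy as np

def parcor_synthesize(excitation, k):
    """Standard all-pole lattice synthesis driven by PARCOR (reflection)
    coefficients k[0..p-1]; the filter is stable whenever every |k[i]| < 1."""
    p = len(k)
    b = np.zeros(p + 1)              # backward residuals from the previous sample
    out = np.zeros(len(excitation))
    for n, e in enumerate(excitation):
        f = e
        new_b = np.empty(p + 1)
        for i in range(p, 0, -1):    # run from the top lattice stage down to stage 1
            f = f - k[i - 1] * b[i - 1]
            new_b[i] = b[i - 1] + k[i - 1] * f
        new_b[0] = f
        b = new_b
        out[n] = f                   # synthesized sample
    return out

# Toy usage: white-noise excitation through hypothetical PARCOR values.
rng = np.random.default_rng(0)
speech = parcor_synthesize(rng.normal(size=8000), k=[0.7, -0.4, 0.2])
```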



Some Estimates of Necessary Number of Connections and Hidden Units for Feed-Forward Networks

Neural Information Processing Systems

Feed-forward networks with fixed hidden units (FHU-networks) are compared against the remaining category of feed-forward networks with variable hidden units (VHU-networks).


Learning Cellular Automaton Dynamics with Neural Networks

Neural Information Processing Systems

We have trained networks of Σ-Π units with short-range connections to simulate simple cellular automata that exhibit complex or chaotic behaviour. Three levels of learning are possible (in decreasing order of difficulty): learning the underlying automaton rule, learning asymptotic dynamical behaviour, and learning to extrapolate the training history. The levels of learning achieved with and without weight sharing for different automata provide new insight into their dynamics.
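
A Σ-Π unit computes a weighted sum of products of its inputs, which is enough to represent any Boolean function of a small neighbourhood. As an illustration of the easiest learning level (representing the rule itself), the sketch below fits a single Σ-Π unit exactly to an elementary cellular-automaton rule's truth table; it is not the paper's training setup, which uses lattices of such units with short-range connections and optional weight sharing.

```python
import numpy as np
from itertools import combinations, product

def sigma_pi_features(cells):
    """Products of all subsets of a 3-cell neighbourhood: the monomial
    basis a single sigma-pi unit sums over."""
    feats = [1.0]
    for r in (1, 2, 3):
        for idx in combinations(range(3), r):
            feats.append(np.prod([cells[i] for i in idx]))
    return np.array(feats)

def fit_rule(rule_number):
    """Fit one sigma-pi unit to an elementary CA rule from its truth table.
    Any Boolean function of 3 inputs is exactly representable this way."""
    neighbourhoods = list(product([0, 1], repeat=3))
    X = np.array([sigma_pi_features(n) for n in neighbourhoods])
    # Wolfram numbering: bit i of the rule number is the output for the
    # neighbourhood read as a 3-bit integer (left cell = most significant).
    y = np.array([(rule_number >> (4 * n[0] + 2 * n[1] + n[2])) & 1
                  for n in neighbourhoods], dtype=float)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

w = fit_rule(110)
step = lambda n: float(sigma_pi_features(n) @ w) > 0.5
print([int(step(n)) for n in product([0, 1], repeat=3)])  # rule-110 truth table
```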



The Power of Approximating: a Comparison of Activation Functions

Neural Information Processing Systems

We compare activation functions in terms of the approximation power of their feedforward nets. We consider the case of analog as well as Boolean input.

1 Introduction