Learning Graphical Models
Improved Hidden Markov Model Speech Recognition Using Radial Basis Function Networks
Singer, Elliot, Lippmann, Richard P.
The RBF network consists of an input layer, a hidden layer composed of Gaussian basis functions, and an output layer. Connections from the input layer to the hidden layer are fixed at unity while those from the hidden layer to the output layer are trained by minimizing the overall mean-square error between actual and desired output values. Each RBF output node has a corresponding state in a set of HMM word models which represent the words in the vocabulary. HMM word models are left-to-right with no skip states and have a one-state background noise model at either end. The background noise models are identical for all words.
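A minimal NumPy sketch of the architecture described above, assuming fixed Gaussian centres and widths and a least-squares fit of the hidden-to-output weights (which minimises the mean-square error); the function names and toy data are illustrative, not taken from the paper:

    import numpy as np

    def gaussian_hidden_layer(X, centres, widths):
        """Fixed Gaussian basis functions: one activation per (sample, centre)."""
        # X: (n_samples, n_dims), centres: (n_hidden, n_dims), widths: (n_hidden,)
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * widths[None, :] ** 2))

    def train_output_weights(H, T):
        """Least-squares fit of hidden-to-output weights (minimum mean-square error)."""
        # H: (n_samples, n_hidden) hidden activations, T: (n_samples, n_outputs) targets
        W, *_ = np.linalg.lstsq(H, T, rcond=None)
        return W  # (n_hidden, n_outputs)

    # Illustrative usage: 3 hidden Gaussians, 2 output nodes (e.g. two HMM states)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4))
    centres = rng.normal(size=(3, 4))
    widths = np.ones(3)
    T = rng.random(size=(100, 2))

    H = gaussian_hidden_layer(X, centres, widths)
    W = train_output_weights(H, T)
    outputs = H @ W   # RBF network outputs, one per HMM state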
Time-Warping Network: A Hybrid Framework for Speech Recognition
Levin, Esther, Pieraccini, Roberto, Bocchieri, Enrico
Such systems attempt to combine the best features of both models: the temporal structure of HMMs and the discriminative power of neural networks. In this work we define a time-warping (TW) neuron that extends the operation of the formal neuron of a back-propagation network by warping the input pattern to match it optimally to its weights. We show that a single-layer network of TW neurons is equivalent to a Gaussian density HMM-based recognition system.
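A rough sketch of the time-warping idea under simplifying assumptions: the neuron aligns an input frame sequence to its weight template by dynamic programming (stay-or-advance transitions only, squared-error local distortion) and its activation is the negated best alignment cost. This is not the paper's exact formulation, and all names are illustrative:

    import numpy as np

    def tw_neuron_activation(x_seq, w_seq):
        """Time-warping 'neuron': align the input sequence to the weight template
        by dynamic programming and return the best (minimum-distortion) score."""
        T, D = len(x_seq), len(w_seq)
        cost = np.full((T + 1, D + 1), np.inf)
        cost[0, 0] = 0.0
        for t in range(1, T + 1):
            for d in range(1, D + 1):
                local = np.sum((x_seq[t - 1] - w_seq[d - 1]) ** 2)  # local distortion
                cost[t, d] = local + min(cost[t - 1, d],      # stay on same weight
                                         cost[t - 1, d - 1])  # advance to next weight
        return -cost[T, D]  # higher activation = better warped match

    # Illustrative usage: score one utterance against two word templates
    rng = np.random.default_rng(1)
    utterance = rng.normal(size=(20, 8))            # 20 frames, 8 features
    templates = [rng.normal(size=(5, 8)) for _ in range(2)]
    scores = [tw_neuron_activation(utterance, w) for w in templates]
    print(int(np.argmax(scores)))                   # recognised "word"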
Neural Control for Rolling Mills: Incorporating Domain Theories to Overcome Data Deficiency
Röscheisen, Martin, Hofmann, Reimar, Tresp, Volker
In a Bayesian framework, we give a principled account of how domain-specific prior knowledge such as imperfect analytic domain theories can be optimally incorporated into networks of locally-tuned units: by choosing a specific architecture and by applying a specific training regimen. Our method proved successful in overcoming the data deficiency problem in a large-scale application to devise a neural control for a hot line rolling mill. It achieves in this application significantly higher accuracy than optimally-tuned standard algorithms such as sigmoidal backpropagation, and outperforms the state-of-the-art solution.
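One common way to realise this kind of prior knowledge, sketched below purely as an illustration and not as the authors' specific architecture or training regimen: penalise a network of locally-tuned (RBF-style) units for deviating from an imperfect analytic theory on auxiliary inputs where no measurements exist. The analytic_theory placeholder and all other names are hypothetical:

    import numpy as np

    def analytic_theory(x):
        """Imperfect analytic domain theory (hypothetical placeholder model)."""
        return 0.5 * x[:, 0] - 0.1 * x[:, 1]

    def rbf_predict(X, centres, widths, w):
        """Network of locally-tuned (Gaussian) units with linear output weights."""
        act = np.exp(-((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
                     / (2.0 * widths[None, :] ** 2))
        return act @ w

    def loss(w, X, y, centres, widths, X_prior, lam):
        """Data misfit plus a penalty tying the network to the domain theory
        on extra 'prior' inputs where no measurements are available."""
        data_term = np.mean((rbf_predict(X, centres, widths, w) - y) ** 2)
        prior_term = np.mean((rbf_predict(X_prior, centres, widths, w)
                              - analytic_theory(X_prior)) ** 2)
        return data_term + lam * prior_term

    # Illustrative shapes only
    rng = np.random.default_rng(4)
    X, y = rng.normal(size=(30, 2)), rng.normal(size=30)
    X_prior = rng.normal(size=(200, 2))          # dense coverage of the input space
    centres, widths, w = rng.normal(size=(6, 2)), np.ones(6), np.zeros(6)
    print(loss(w, X, y, centres, widths, X_prior, lam=0.1))

The penalty term acts like a prior that pulls the network toward the domain theory in data-poor regions, while the data term dominates wherever measurements are available.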
Connectionist Optimisation of Tied Mixture Hidden Markov Models
Renals, Steve, Morgan, Nelson, Bourlard, Hervé, Franco, Horacio, Cohen, Michael
Issues relating to the estimation of hidden Markov model (HMM) local probabilities are discussed. In particular we note the isomorphism of radial basis function (RBF) networks to tied mixture density modelling; additionally we highlight the differences between these methods arising from the different training criteria employed. We present a method in which connectionist training can be modified to resolve these differences and discuss some preliminary experiments. Finally, we discuss some outstanding problems with discriminative training.
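The isomorphism noted above can be made concrete: a tied-mixture HMM evaluates p(x | state j) = Σ_k c_jk N(x; μ_k, Σ_k) with a pool of Gaussians shared across states, which is exactly the function computed by an RBF network whose hidden units are the shared Gaussians and whose output weights are the state-specific mixture coefficients; only the training criterion differs. A small NumPy sketch assuming diagonal covariances (names illustrative):

    import numpy as np

    def shared_gaussians(x, means, variances):
        """Tied pool of Gaussian densities, shared by all HMM states / RBF hidden units."""
        diff2 = (x[None, :] - means) ** 2
        log_norm = -0.5 * np.sum(np.log(2 * np.pi * variances), axis=1)
        return np.exp(log_norm - 0.5 * np.sum(diff2 / variances, axis=1))  # (n_kernels,)

    def tied_mixture_likelihoods(x, means, variances, mix_weights):
        """p(x | state j) = sum_k c_jk N(x; mu_k, Sigma_k): the tied-mixture form,
        i.e. an RBF network with shared Gaussian hidden units and linear output weights."""
        g = shared_gaussians(x, means, variances)   # shared kernels
        return mix_weights @ g                       # (n_states,)

    # Illustrative usage
    rng = np.random.default_rng(2)
    means = rng.normal(size=(8, 5))                  # 8 tied Gaussians, 5-dim features
    variances = np.ones((8, 5))
    mix_weights = rng.dirichlet(np.ones(8), size=3)  # 3 states, weights sum to 1
    x = rng.normal(size=5)
    print(tied_mixture_likelihoods(x, means, variances, mix_weights))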
Fault Diagnosis of Antenna Pointing Systems using Hybrid Neural Network and Signal Processing Models
Smyth, Padhraic, Mellstrom, Jeff
We describe in this paper a novel application of neural networks to system health monitoring of a large antenna for deep space communications. The paper outlines our approach to building a monitoring system using hybrid signal processing and neural network techniques, including autoregressive modelling, pattern recognition, and Hidden Markov models. We discuss several problems which are somewhat generic in applications of this kind; in particular we address the problem of detecting classes which were not present in the training data. Experimental results indicate that the proposed system is sufficiently reliable for practical implementation.
1 Background: The Deep Space Network
The Deep Space Network (DSN), designed and operated by the Jet Propulsion Laboratory (JPL) for the National Aeronautics and Space Administration (NASA), is unique in terms of providing end-to-end telecommunication capabilities between earth and various interplanetary spacecraft throughout the solar system. The ground component of the DSN consists of three ground station complexes located in California, Spain and Australia, giving full 24-hour coverage for deep space communications.
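A heavily simplified sketch of two generic ingredients of such a hybrid pipeline: autoregressive feature extraction from a sensor signal, and classification with an explicit "unknown class" rejection. The nearest-mean rule and fixed threshold below stand in for the paper's neural network and HMM components, and all names are illustrative:

    import numpy as np

    def ar_features(signal, order=4):
        """Least-squares autoregressive coefficients used as a feature vector."""
        X = np.column_stack([signal[i:len(signal) - order + i] for i in range(order)])
        y = signal[order:]
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coeffs

    def classify_with_rejection(feat, class_means, threshold):
        """Nearest-mean classification; distances beyond the threshold are flagged
        as a class not represented in the training data."""
        dists = np.linalg.norm(class_means - feat, axis=1)
        k = int(np.argmin(dists))
        return k if dists[k] < threshold else -1   # -1 = 'unknown fault class'

    # Illustrative usage: two known fault classes plus the 'unknown' option
    rng = np.random.default_rng(5)
    signal = rng.normal(size=500)
    feat = ar_features(signal)
    class_means = rng.normal(size=(2, 4))
    print(classify_with_rejection(feat, class_means, threshold=2.0))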
Bayesian Model Comparison and Backprop Nets
MacKay, David J. C.
The Bayesian model comparison framework is reviewed, and the Bayesian Occam's razor is explained. This framework can be applied to feedforward networks, making possible (1) objective comparisons between solutions using alternative network architectures; (2) objective choice of magnitude and type of weight decay terms; (3) quantified estimates of the error bars on network parameters and on network output. The framework also generates a measure of the effective number of parameters determined by the data. The relationship of Bayesian model comparison to recent work on prediction of generalisation ability (Guyon et al., 1992; Moody, 1992) is discussed.
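Under the Gaussian (quadratic) approximation used in this framework, the effective number of well-determined parameters is γ = Σ_i λ_i / (λ_i + α), where λ_i are the eigenvalues of the data-misfit Hessian and α is the weight-decay (prior precision) coefficient. A minimal sketch of this measure; the toy Hessian and names are illustrative:

    import numpy as np

    def effective_num_params(hessian_data, alpha):
        """Effective number of well-determined parameters:
        gamma = sum_i lambda_i / (lambda_i + alpha),
        where lambda_i are eigenvalues of the data-misfit Hessian and alpha is
        the weight-decay (prior precision) coefficient."""
        lam = np.linalg.eigvalsh(hessian_data)
        return float(np.sum(lam / (lam + alpha)))

    # Illustrative usage with a random positive-semidefinite Hessian
    rng = np.random.default_rng(3)
    A = rng.normal(size=(10, 10))
    H = A @ A.T            # stand-in for the Hessian of the data misfit
    print(effective_num_params(H, alpha=1.0))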