Networks with Learned Unit Response Functions
Feedforward networks composed of units which compute a sigmoidal function of a weighted sum of their inputs have been much investigated. We tested the approximation and estimation capabilities of networks using functions more complex than sigmoids. Three classes of functions were tested: polynomials, rational functions, and flexible Fourier series. Unlike sigmoids, these classes can fit nonmonotonic functions. They were compared on three problems: prediction of Boston housing prices, the sunspot count, and robot arm inverse dynamics. The complex units attained clearly superior performance on the robot arm problem, which is a highly nonmonotonic, pure approximation problem. On the noisy and only mildly nonlinear Boston housing and sunspot problems, differences among the complex units were revealed; polynomials did poorly, whereas rationals and flexible Fourier series were comparable to sigmoids.
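A minimal sketch of the idea of a learned unit response function, contrasted with a fixed sigmoid; the coefficient names (a, b, omega) and the exact parameterization are our own illustration, not the paper's implementation.

import numpy as np

def sigmoid_unit(x, w):
    # Conventional unit: a fixed sigmoid of the weighted input sum.
    return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

def fourier_unit(x, w, a, b, omega=1.0):
    # Unit with a learned response: a flexible Fourier series of the
    # weighted sum.  The coefficient vectors a, b and the frequency
    # omega are treated as trainable parameters (illustrative names).
    s = np.dot(w, x)
    k = np.arange(1, len(a) + 1)
    return a @ np.cos(k * omega * s) + b @ np.sin(k * omega * s)

Unlike a sigmoid, such a unit can represent nonmonotonic responses of its weighted input sum, which is the property the abstract highlights.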
Direction Selective Silicon Retina that uses Null Inhibition
Benson, Ronald G., Delbrück, Tobi
Biological retinas extract spatial and temporal features in an attempt to reduce the complexity of performing visual tasks. We have built and tested a silicon retina which encodes several useful temporal features found in vertebrate retinas. The cells in our silicon retina are selective to direction, highly sensitive to positive contrast changes around an ambient light level, and tuned to a particular velocity. Inhibitory connections in the null direction implement the direction selectivity we desire. This silicon retina is on a 4.6 x 6.8 mm die and consists of a 47 x 41 array of photoreceptors.
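A rough behavioural sketch of null-direction inhibition, assuming a one-dimensional array in which each cell is excited by its own photoreceptor and inhibited by a delayed signal from its neighbour on the null-direction side; the parameters (delay, w_inh) are illustrative, not taken from the chip.

import numpy as np

def null_inhibition_response(stim, delay=1, w_inh=1.0):
    # stim[t, i]: positive contrast change seen by photoreceptor i at time t.
    # Motion in the null direction arrives together with the delayed
    # inhibition and is suppressed; motion in the preferred direction is not.
    T, N = stim.shape
    out = np.zeros((T, N))
    for t in range(delay, T):
        for i in range(1, N):
            out[t, i] = max(stim[t, i] - w_inh * stim[t - delay, i - 1], 0.0)
    return out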
Principles of Risk Minimization for Learning Theory
Learning is posed as a problem of function estimation, for which two principles of solution are considered: empirical risk minimization and structural risk minimization. These two principles are applied to two different statements of the function estimation problem: global and local. Systematic improvements in prediction power are illustrated in application to zip-code recognition.
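A generic sketch of the empirical risk functional referred to above (our own schematic, not the paper's notation): given a loss and a parametric predictor, the empirical risk is the sample average of the loss.

import numpy as np

def empirical_risk(loss, f, w, xs, ys):
    # Empirical risk: the average loss of f(x; w) over the observed sample.
    return np.mean([loss(y, f(x, w)) for x, y in zip(xs, ys)])

# Empirical risk minimization chooses w to minimize this average alone;
# structural risk minimization additionally selects among nested model
# classes of increasing capacity, trading empirical risk against a
# capacity (confidence) term.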
Connectionist Optimisation of Tied Mixture Hidden Markov Models
Renals, Steve, Morgan, Nelson, Bourlard, Hervé, Franco, Horacio, Cohen, Michael
Issues relating to the estimation of hidden Markov model (HMM) local probabilities are discussed. In particular, we note the isomorphism of radial basis function (RBF) networks to tied mixture density modelling; additionally we highlight the differences between these methods arising from the different training criteria employed. We present a method in which connectionist training can be modified to resolve these differences and discuss some preliminary experiments. Finally, we discuss some outstanding problems with discriminative training.
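A toy sketch of the isomorphism mentioned above (our own illustration, with hypothetical names): the Gaussian hidden-unit activations of an RBF network play the role of shared (tied) mixture components, and the hidden-to-output weights play the role of per-state mixture weights.

import numpy as np

def shared_gaussians(x, centres, var):
    # Shared Gaussian kernels: RBF hidden units, doubling as tied mixture
    # components.
    d2 = np.sum((x - centres) ** 2, axis=1)
    return np.exp(-0.5 * d2 / var)

def rbf_output(x, centres, var, W):
    # RBF network: outputs are weighted sums of the basis activations.
    return W @ shared_gaussians(x, centres, var)

def tied_mixture_density(x, centres, var, mix_weights):
    # Tied mixture model: per-state densities reuse the same shared
    # Gaussians, combined by mixture weights.  Structurally this is the
    # same computation as rbf_output; the two differ in the training
    # criterion (likelihood vs. discriminative error), as the abstract notes.
    return mix_weights @ shared_gaussians(x, centres, var)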
Networks for the Separation of Sources that are Superimposed and Delayed
Platt, John C., Faggin, Federico
We have created new networks to unmix signals which have been mixed either with time delays or via filtering. We first show that a subset of the Herault-Jutten learning rules fulfills a principle of minimum output power. We then apply this principle to extensions of the Herault-Jutten network which have delays in the feedback path. Our networks perform well on real speech and music signals that have been mixed using time delays or filtering.
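A minimal two-channel sketch of a feedback network with a single adaptive delayed cross-feedback tap per channel, adapted so as to reduce output power; the gains, learning rate, and update rule are our own illustration of the minimum-output-power idea, not the authors' network.

import numpy as np

def unmix_delayed(x1, x2, delay, lr=1e-4, n_pass=10):
    # Each output subtracts a delayed, weighted copy of the other output
    # (delays in the feedback path).  The cross weights a12, a21 are
    # adapted by stochastic gradient descent on the output power.
    a12 = a21 = 0.0
    y1, y2 = np.copy(x1), np.copy(x2)
    for _ in range(n_pass):
        for t in range(delay, len(x1)):
            y1[t] = x1[t] - a12 * y2[t - delay]
            y2[t] = x2[t] - a21 * y1[t - delay]
            # Gradient step that decreases y1**2 + y2**2.
            a12 += lr * y1[t] * y2[t - delay]
            a21 += lr * y2[t] * y1[t - delay]
    return y1, y2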
A Computational Mechanism to Account for Averaged Modified Hand Trajectories
Using the double-step target displacement paradigm, the mechanisms underlying arm trajectory modification were investigated. With short (10-110 msec) inter-stimulus intervals, the resulting hand motions were initially directed between the first and second target locations. The kinematic features of the modified motions were accounted for by the superposition scheme, which involves the vectorial addition of two independent point-to-point motion units: one for moving the hand toward an internally specified location and a second one for moving between that location and the final target location. The similarity between the inferred internally specified locations and previously reported measured endpoints of the first saccades in double-step eye-movement studies may suggest similarities between perceived target locations in eye and hand motor control.
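A small sketch of the superposition scheme described above: two independent point-to-point motion units added vectorially. The minimum-jerk profile and the parameter names (via, onset) are used here purely for illustration and are not asserted to be the paper's exact formulation.

import numpy as np

def point_to_point(p0, p1, T, t):
    # Smooth point-to-point motion unit (minimum-jerk time course, for
    # illustration only); clipped so it holds its endpoints outside [0, T].
    tau = np.clip(t / T, 0.0, 1.0)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return p0 + (p1 - p0) * s[:, None]

def superposed_trajectory(start, via, target, T1, T2, onset, t):
    # Superposition scheme: unit 1 moves the hand toward an internally
    # specified location 'via'; unit 2, starting at 'onset', adds the
    # displacement from 'via' to the final target.  Their vectorial sum
    # gives the modified trajectory.
    unit1 = point_to_point(start, via, T1, t)
    unit2 = point_to_point(np.zeros_like(start), target - via, T2, t - onset)
    return unit1 + unit2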
Threshold Network Learning in the Presence of Equivalences
This paper applies the theory of Probably Approximately Correct (PAC) learning to multiple output feedforward threshold networks in which the weights conform to certain equivalences. It is shown that the sample size for reliable learning can be bounded above by a formula similar to that required for single output networks with no equivalences. The best previously obtained bounds are improved for all cases.
Improved Hidden Markov Model Speech Recognition Using Radial Basis Function Networks
Singer, Elliot, Lippmann, Richard P.
The RBF network consists of an input layer, a hidden layer composed of Gaussian basis functions, and an output layer. Connections from the input layer to the hidden layer are fixed at unity while those from the hidden layer to the output layer are trained by minimizing the overall mean-square error between actual and desired output values. Each RBF output node has a corresponding state in a set of HMM word models which represent the words in the vocabulary. HMM word models are left-to-right with no skip states and have a one-state background noise model at either end. The background noise models are identical for all words.
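A minimal sketch of the architecture described above, under the stated design (unity input-to-hidden connections, Gaussian hidden units, output layer trained by minimizing mean-square error); the function names and the use of a closed-form least-squares fit are ours.

import numpy as np

def hidden_activations(X, centres, var):
    # Gaussian basis functions.  Input-to-hidden connections are fixed at
    # unity, so the hidden layer simply evaluates the kernels on the input.
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-0.5 * d2 / var)

def train_output_weights(X, targets, centres, var):
    # Hidden-to-output weights chosen to minimize the overall mean-square
    # error between actual and desired outputs (a linear least-squares fit).
    H = hidden_activations(X, centres, var)
    W, *_ = np.linalg.lstsq(H, targets, rcond=None)
    return W

def state_outputs(X, W, centres, var):
    # One output node per HMM state; these outputs feed the left-to-right
    # word models during recognition.
    return hidden_activations(X, centres, var) @ W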