Developing Population Codes by Minimizing Description Length
Zemel, Richard S., Hinton, Geoffrey E.
The Minimum Description Length principle (MDL) can be used to train the hidden units of a neural network to extract a representation that is cheap to describe but nonetheless allows the input to be reconstructed accurately. We show how MDL can be used to develop highly redundant population codes. Each hidden unit has a location in a low-dimensional implicit space. If the hidden unit activities form a bump of a standard shape in this space, they can be cheaply encoded by the center of this bump. So the weights from the input units to the hidden units in an autoencoder are trained to make the activities form a standard bump.
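The bump-coding objective lends itself to a compact illustration. Below is a minimal NumPy sketch, not the authors' implementation: hidden units get locations in a 1-D implicit space, the cheap code is the activity-weighted center, and the description cost is taken here as squared deviation from a standard Gaussian bump plus squared reconstruction error. The bump width, the layer sizes, and the squared-error coding costs are all our assumptions.

    # Minimal sketch of the bump-coding cost (not the authors' code).
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid = 16, 32
    locs = np.linspace(0.0, 1.0, n_hid)        # implicit 1-D location of each hidden unit
    sigma = 0.1                                # assumed standard bump width

    W_enc = rng.normal(0, 0.1, (n_hid, n_in))  # input -> hidden weights
    W_dec = rng.normal(0, 0.1, (n_in, n_hid))  # hidden -> reconstruction weights

    def bump(center):
        return np.exp(-0.5 * ((locs - center) / sigma) ** 2)

    def mdl_loss(x):
        h = 1.0 / (1.0 + np.exp(-W_enc @ x))            # hidden activities
        center = np.sum(locs * h) / (np.sum(h) + 1e-8)  # the cheap code: one scalar
        bump_cost = np.sum((h - bump(center)) ** 2)     # cost of describing deviations
        recon_cost = np.sum((x - W_dec @ h) ** 2)       # reconstruction error
        return recon_cost + bump_cost                   # description-length surrogate

    print(mdl_loss(rng.normal(size=n_in)))

Training would then adjust W_enc and W_dec to minimize this loss over a dataset, which is what pushes the activities toward a standard bump.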
Inverse Dynamics of Speech Motor Control
Hirayama, Makoto, Vatikiotis-Bateson, Eric, Kawato, Mitsuo
This inverse dynamics model allows the use of a faster speech motor control scheme, which can be applied to phoneme-to-speech synthesis via musculo-skeletal system dynamics, or, in the future, to speech recognition. The forward acoustic model, which is the mapping from articulator trajectories to the acoustic parameters, was improved by adding velocity and voicing information inputs to distinguish acoustic …
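As a rough illustration of the input layout described above (our sketch, not the authors' model), articulator positions, their velocities, and a voicing bit can be concatenated and mapped to acoustic parameters by a small feedforward network; the dimensions and the single tanh hidden layer here are illustrative assumptions.

    # Hypothetical forward acoustic model: (positions, velocities, voicing) -> acoustics.
    import numpy as np

    rng = np.random.default_rng(0)
    n_artic, n_acoustic, n_hid = 8, 12, 32              # assumed sizes

    W1 = rng.normal(0, 0.1, (n_hid, 2 * n_artic + 1))   # pos + vel + voicing bit
    W2 = rng.normal(0, 0.1, (n_acoustic, n_hid))

    def forward_acoustic(pos, vel, voiced):
        x = np.concatenate([pos, vel, [1.0 if voiced else 0.0]])
        return W2 @ np.tanh(W1 @ x)

    print(forward_acoustic(rng.normal(size=n_artic),
                           rng.normal(size=n_artic), voiced=True))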
Clustering with a Domain-Specific Distance Measure
Gold, Steven, Mjolsness, Eric, Rangarajan, Anand
Critical features of a domain (such as invariance under translation, rotation, and permutation) are captured within the clustering procedure, rather than reflected in the properties of feature sets created prior to clustering. The distance measure and learning problem are formally described as nested objective functions. We derive an efficient algorithm by using optimization techniques that allow us to divide up the objective function into parts which may be minimized in distinct phases. The algorithm has accurately recreated 10 prototypes from a randomly generated sample database of 100 images consisting of 20 points each in 120 experiments. Finally, by incorporating permutation invariance in our distance measure, we have a technique that we may be able to apply to the clustering of graphs. Our goal is to develop measures which will enable the learning of objects with shape or structure.
Acknowledgements: This work has been supported by AFOSR grant F49620-92-J-0465 and ONR/DARPA grant N00014-92-J-4048.
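The phase-wise minimization can be made concrete for point sets. The sketch below is our construction, not the paper's algorithm: it computes a distance invariant to translation (centering), rotation (a closed-form Kabsch/Procrustes step), and permutation (linear assignment), alternating the two phases in the spirit of splitting the objective into separately minimized parts.

    # Sketch: pose- and permutation-invariant distance between 2-D point sets.
    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    def invariant_distance(X, Y, n_iters=10):
        """X, Y: (n, 2) arrays of equal size."""
        Xc = X - X.mean(axis=0)                      # translation invariance
        Yc = Y - Y.mean(axis=0)
        R = np.eye(2)
        for _ in range(n_iters):
            # Phase 1: best permutation given the current rotation.
            _, perm = linear_sum_assignment(cdist(Xc @ R.T, Yc))
            Yp = Yc[perm]
            # Phase 2: best rotation given the matching (Kabsch).
            U, _, Vt = np.linalg.svd(Xc.T @ Yp)
            d = np.sign(np.linalg.det(Vt.T @ U.T))
            R = Vt.T @ np.diag([1.0, d]) @ U.T
        return np.sum((Xc @ R.T - Yp) ** 2)

    rng = np.random.default_rng(1)
    A = rng.normal(size=(20, 2))
    th = 0.7
    rot = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    B = (A @ rot.T + 3.0)[rng.permutation(20)]       # rotate, translate, relabel
    print(invariant_distance(A, B))  # near 0 when the alternation finds the matching

Like any alternating scheme, this can stop in a local minimum for large rotations; restarting from several initial rotations is a standard remedy.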
Grammatical Inference by Attentional Control of Synchronization in an Oscillating Elman Network
Baird, Bill, Troyer, Todd, Eeckman, Frank
We show how an "Elman" network architecture, constructed from recurrently connected oscillatory associative memory network modules, can employ selective "attentional" control of synchronization to direct the flow of communication and computation within the architecture to solve a grammatical inference problem. Previously we have shown how the discrete-time "Elman" network algorithm can be implemented in a network completely described by continuous ordinary differential equations. The time steps (machine cycles) of the system are implemented by rhythmic variation (clocking) of a bifurcation parameter. In this architecture, oscillation amplitude codes the information content or activity of a module (unit), whereas phase and frequency are used to "softwire" the network. Only synchronized modules communicate by exchanging amplitude information; the activity of non-resonating modules contributes incoherent crosstalk noise. Attentional control is modeled as a special subset of the hidden modules whose outputs affect the resonant frequencies of other hidden modules. They control synchrony among the other modules and direct the flow of computation (attention) to effect transitions between two subgraphs of a thirteen-state automaton which the system emulates to generate a Reber grammar. The internal crosstalk noise is used to drive the required random transitions of the automaton.
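A toy simulation conveys the gating idea, though it is far simpler than the paper's ODE architecture: treat each module as an oscillator with an amplitude, a phase, and a frequency, and let amplitude information flow between two modules only in proportion to their instantaneous phase coherence, so that shifting a module's frequency (our stand-in for attentional control) opens or closes the channel. The sizes, dynamics, and tanh nonlinearity below are our simplifications.

    # Toy "softwiring": coupling is gated by phase coherence between modules.
    import numpy as np

    rng = np.random.default_rng(0)
    n, dt, steps = 4, 0.01, 2000

    a = rng.uniform(0.5, 1.0, n)             # amplitudes (information content)
    p = rng.uniform(0, 2 * np.pi, n)         # phases
    w = np.array([10.0, 10.0, 25.0, 25.0])   # frequencies: 0,1 resonate; 2,3 do not
    W = rng.normal(0, 0.5, (n, n))           # amplitude-coupling weights

    for _ in range(steps):
        coherence = np.cos(p[:, None] - p[None, :])    # gates each connection
        a += dt * (-a + np.tanh((W * coherence) @ a))  # amplitude relaxation
        p += dt * w                                    # phases advance freely

    print(np.round(a, 3))

Mismatched-frequency pairs see a coherence term that oscillates and averages toward zero, playing the "incoherent crosstalk" role described in the abstract.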
How to Choose an Activation Function
Mhaskar, H. N., Micchelli, C. A.
We study the complexity problem in artificial feedforward neural networks designed to approximate real-valued functions of several real variables; i.e., we estimate the number of neurons in a network required to ensure a given degree of approximation to every function in a given function class. We indicate how to construct networks with the indicated number of neurons evaluating standard activation functions. Our general theorem shows that the smoother the activation function, the better the rate of approximation.
1 INTRODUCTION
The approximation capabilities of feedforward neural networks with a single hidden layer have been studied by many authors, e.g., [1, 2, 5]. In [10], we have shown that such a network using practically any nonlinear activation function can approximate any continuous function of any number of real variables on any compact set to any desired degree of accuracy. A central question in this theory is the following.
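For orientation, degree-of-approximation results of this kind are usually stated as a rate in the number of neurons. The following is a sketch in our notation of the typical form, not the paper's exact theorem:

    % Typical shape of such a bound (our notation, stated as an assumption):
    % f ranges over functions on [-1,1]^d with bounded partial derivatives
    % up to order s, and N_n(phi) is the class of single-hidden-layer
    % networks with n neurons and activation phi.
    \[
      \sup_{f} \; \inf_{g \in \mathcal{N}_n(\phi)} \, \| f - g \|_{\infty}
      \;=\; O\!\left( n^{-s/d} \right),
    \]
    % where attaining the rate for larger s requires a sufficiently smooth
    % activation phi; this is the sense in which smoother activations
    % permit better rates of approximation.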
Exploiting Chaos to Control the Future
Flake, Gary W., Sun, Guo-Zhen, Lee, Yee-Chun
Recently, Ott, Grebogi and Yorke (OGY) [6] found an effective method to control chaotic systems to unstable fixed points by using only small control forces; however, OGY's method is based on and limited to a linear theory and requires considerable knowledge of the dynamics of the system to be controlled. In this paper we use two radial basis function networks: one as a model of an unknown plant and the other as the controller. The controller is trained with a recurrent learning algorithm to minimize a novel objective function such that the controller can locate an unstable fixed point and drive the system into the fixed point with no a priori knowledge of the system dynamics. Our results indicate that the neural controller offers many advantages over OGY's technique.
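For contrast with the paper's neural approach, here is a minimal sketch of the OGY-style baseline it improves on: stabilizing the unstable fixed point of the logistic map by small, bounded nudges of the parameter r computed from a local linearization. The choice of map, r = 3.9, the nudge cap, and the activation window are ours; the paper's RBF model and controller are not shown.

    # OGY-flavored control of the logistic map x -> r x (1 - x).
    import numpy as np

    r0, eps = 3.9, 0.05               # nominal parameter, max allowed nudge
    x_star = 1.0 - 1.0 / r0           # unstable fixed point
    dfdx = r0 * (1.0 - 2.0 * x_star)  # = 2 - r0, magnitude > 1 (unstable)
    dfdr = x_star * (1.0 - x_star)

    x = 0.3
    for _ in range(1000):
        dr = 0.0
        if abs(x - x_star) < 0.05:    # act only near the fixed point
            # choose dr so the linearized next state lands on x_star
            dr = np.clip(-dfdx * (x - x_star) / dfdr, -eps, eps)
        x = (r0 + dr) * x * (1.0 - x)

    print(abs(x - x_star))            # typically tiny once the orbit is captured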
Lipreading by neural networks: Visual preprocessing, learning, and sensory integration
Wolff, Gregory J., Prasad, K. Venkatesh, Stork, David G., Hennecke, Marcus
Automated speech recognition is notoriously hard, and thus any predictive source of information and constraints that could be incorporated into a computer speech recognition system would be desirable. Humans, especially the hearing impaired, can utilize visual information - "speech reading" - for improved accuracy (Dodd & Campbell, 1987, Sanders & Goodrich, 1971). Speech reading can provide direct information about segments, phonemes, rate, speaker gender and identity, and subtle information for segmenting speech from background noise or multiple speakers (De Filippo & Sims, 1988, Green & Miller, 1985). Fundamental support for the use of visual information comes from the complementary nature of the visual and acoustic speech signals. Utterances that are difficult to distinguish acoustically are the easiest to distinguish visually.
Functional Models of Selective Attention and Context Dependency
Scope
This workshop reviewed and classified the various models which have emerged from the general concept of selective attention and context dependency, and sought to identify their commonalities. It was concluded that the motivation and mechanism of these functional models are "efficiency" and "factoring", respectively. The workshop focused on computational models of selective attention and context dependency within the realm of neural networks. We treated only "functional" models; computational models of biological neural systems, and symbolic or rule-based systems, were omitted from the discussion.
Presentations
Thomas H. Hildebrandt presented the results of his recent survey of the literature on functional models of selective attention and context dependency.