Nonlinear Image Interpolation using Manifold Learning
Bregler, Christoph, Omohundro, Stephen M.
The problem of interpolating between specified images in an image sequence is a simple, but important task in model-based vision. We describe an approach based on the abstract task of "manifold learning" and present results on both synthetic and real image sequences. This problem arose in the development of a combined lipreading and speech recognition system.
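The abstract describes interpolating between images by moving along a learned low-dimensional manifold rather than blending pixels directly. The sketch below illustrates that idea with PCA as a simple linear stand-in for the learned manifold; the subspace dimension, array shapes, and function names are illustrative assumptions, not the authors' actual manifold-learning procedure.

```python
# A minimal sketch of manifold-based interpolation, using PCA as a linear
# stand-in for the learned manifold (the paper's nonlinear procedure is
# not reproduced here).
import numpy as np

def fit_subspace(images, dim):
    """images: (n_samples, n_pixels) array; returns mean and basis rows."""
    mean = images.mean(axis=0)
    # Principal directions of the image set via SVD of the centered data.
    _, _, vt = np.linalg.svd(images - mean, full_matrices=False)
    return mean, vt[:dim]

def interpolate(img_a, img_b, mean, basis, alpha):
    """Blend two images by interpolating their low-dimensional codes."""
    code_a = basis @ (img_a - mean)
    code_b = basis @ (img_b - mean)
    code = (1.0 - alpha) * code_a + alpha * code_b
    # Project the interpolated code back to image space.
    return mean + basis.T @ code

# Example with random stand-in "images".
rng = np.random.default_rng(0)
images = rng.random((50, 16 * 16))
mean, basis = fit_subspace(images, dim=5)
halfway = interpolate(images[0], images[1], mean, basis, alpha=0.5)
```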
Temporal Dynamics of Generalization in Neural Networks
Wang, Changfeng, Venkatesh, Santosh S.
This paper presents a rigorous characterization of how a general nonlinear learning machine generalizes during the training process when it is trained on a random sample using a gradient descent algorithm based on reduction of training error. It is shown, in particular, that best generalization performance occurs, in general, before the global minimum of the training error is achieved. The different roles played by the complexity of the machine class and the complexity of the specific machine in the class during learning are also precisely demarcated. 1 INTRODUCTION In learning machines such as neural networks, two major factors that affect the 'goodness of fit' of the examples are network size (complexity) and training time. These are also the major factors that affect the generalization performance of the network. Many theoretical studies exploring the relation between generalization performance and machine complexity support the parsimony heuristics suggested by Occam's razor, to wit that amongst machines with similar training performance one should opt for the machine of least complexity.
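The observation that generalization peaks before the training-error minimum is the usual rationale for validation-based early stopping. The sketch below shows one common way to act on it; the linear model, squared loss, and all parameter names are illustrative assumptions rather than the paper's analysis.

```python
# A minimal early-stopping sketch motivated by the claim that generalization
# peaks before the training-error minimum. Linear model and squared loss are
# illustrative choices, not the paper's setting.
import numpy as np

def train_with_early_stopping(x_tr, y_tr, x_val, y_val,
                              lr=0.01, max_steps=5000, patience=50):
    w = np.zeros(x_tr.shape[1])
    best_w, best_val, since_best = w.copy(), np.inf, 0
    for _ in range(max_steps):
        # Gradient descent on the training squared error.
        grad = x_tr.T @ (x_tr @ w - y_tr) / len(y_tr)
        w -= lr * grad
        # Monitor held-out error and keep the best weights seen so far.
        val_err = np.mean((x_val @ w - y_val) ** 2)
        if val_err < best_val:
            best_w, best_val, since_best = w.copy(), val_err, 0
        else:
            since_best += 1
            if since_best >= patience:   # stop before overfitting sets in
                break
    return best_w

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 10))
y = x @ rng.normal(size=10) + 0.1 * rng.normal(size=200)
w = train_with_early_stopping(x[:150], y[:150], x[150:], y[150:])
```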
On the Computational Utility of Consciousness
Mathis, Donald W., Mozer, Michael C.
We propose a computational framework for understanding and modeling human consciousness. This framework integrates many existing theoretical perspectives, yet is sufficiently concrete to allow simulation experiments. We do not attempt to explain qualia (subjective experience), but instead ask what differences exist within the cognitive information processing system when a person is conscious of mentally-represented information versus when that information is unconscious. The central idea we explore is that the contents of consciousness correspond to temporally persistent states in a network of computational modules. Three simulations are described illustrating that the behavior of persistent states in the models corresponds roughly to the behavior of conscious states people experience when performing similar tasks. Our simulations show that periodic settling to persistent (i.e., conscious) states improves performance by cleaning up inaccuracies and noise, forcing decisions, and helping keep the system on track toward a solution.
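The claim that settling to a temporally persistent state cleans up inaccuracies and noise can be illustrated with a generic attractor network that iterates until its state stops changing. The sketch below uses a Hopfield-style network purely as a stand-in; it is not the modular architecture simulated in the paper.

```python
# A generic attractor-settling sketch: iterate updates until the state stops
# changing (a "persistent state"), which cleans up a noisy input. This is an
# illustration of the settling idea only, not the paper's modular models.
import numpy as np

def hopfield_weights(patterns):
    """Hebbian outer-product weights for +/-1 patterns."""
    w = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(w, 0.0)
    return w / len(patterns)

def settle(state, w, max_iters=100):
    """Synchronously update until the state is persistent (a fixed point)."""
    for _ in range(max_iters):
        new_state = np.sign(w @ state)
        new_state[new_state == 0] = 1
        if np.array_equal(new_state, state):
            break
        state = new_state
    return state

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
w = hopfield_weights([pattern])
noisy = pattern.copy()
noisy[0] *= -1                      # corrupt one bit
print(settle(noisy, w))             # settles back to the stored pattern
```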
Real-Time Control of a Tokamak Plasma Using Neural Networks
Bishop, Chris M., Haynes, Paul S., Smith, Mike E U, Todd, Tom N., Trotman, David L., Windsor, Colin G.
This paper presents results from the first use of neural networks for the real-time feedback control of high temperature plasmas in a tokamak fusion experiment. The tokamak is currently the principal experimental device for research into the magnetic confinement approach to controlled fusion. In the tokamak, hydrogen plasmas, at temperatures of up to 100 Million K, are confined by strong magnetic fields. Accurate control of the position and shape of the plasma boundary requires real-time feedback control of the magnetic field structure on a timescale of a few tens of microseconds. Software simulations have demonstrated that a neural network approach can give significantly better performance than the linear technique currently used on most tokamak experiments. The practical application of the neural network approach requires high-speed hardware, for which a fully parallel implementation of the multilayer perceptron, using a hybrid of digital and analogue technology, has been developed.
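At the core of such a controller is a multilayer perceptron mapping diagnostic measurements to control outputs every few tens of microseconds. The sketch below is a minimal forward pass of that kind; the layer sizes, tanh nonlinearity, and signal names are illustrative assumptions, not the experiment's actual network or diagnostics.

```python
# A minimal multilayer-perceptron forward pass of the kind that could map
# diagnostic measurements to control signals. Sizes and names are
# hypothetical, chosen only for illustration.
import numpy as np

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    """One hidden layer with tanh units, linear output layer."""
    hidden = np.tanh(w_hidden @ x + b_hidden)
    return w_out @ hidden + b_out

rng = np.random.default_rng(1)
n_inputs, n_hidden, n_outputs = 16, 8, 2       # hypothetical sizes
w_hidden = rng.normal(scale=0.1, size=(n_hidden, n_inputs))
b_hidden = np.zeros(n_hidden)
w_out = rng.normal(scale=0.1, size=(n_outputs, n_hidden))
b_out = np.zeros(n_outputs)

measurements = rng.normal(size=n_inputs)       # stand-in diagnostic signals
control = mlp_forward(measurements, w_hidden, b_hidden, w_out, b_out)
```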
Dynamic Modelling of Chaotic Time Series with Neural Networks
Principe, Jose C., Kuo, Jyh-Ming
In young barn owls raised with optical prisms over their eyes, these auditory maps are shifted to stay in register with the visual map, suggesting that the visual input imposes a frame of reference on the auditory maps. However, the optic tectum, the first site of convergence of visual with auditory information, is not the site of plasticity for the shift of the auditory maps; the plasticity occurs instead in the inferior colliculus, which contains an auditory map and projects into the optic tectum. We explored a model of the owl remapping in which learning is driven by a global reinforcement signal whose delivery is controlled by visual foveation. A Hebb learning rule gated by reinforcement learned to appropriately adjust auditory maps. In addition, reinforcement learning preferentially adjusted the weights in the inferior colliculus, as in the owl brain, even though the weights were allowed to change throughout the auditory system. This observation raises the possibility that the site of learning does not have to be genetically specified, but could be determined by how the learning procedure interacts with the network architecture.
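The learning rule described, a Hebbian update gated by a global reinforcement signal, can be written compactly: the weight change is the usual pre-times-post Hebbian term scaled by the reward. A minimal sketch, with illustrative variable names and an assumed reward convention:

```python
# A minimal reinforcement-gated Hebbian update: the weight change is the
# usual pre * post Hebbian term, scaled by a global scalar reward signal.
import numpy as np

def gated_hebb_update(weights, pre, post, reward, lr=0.01):
    """weights: (n_post, n_pre); pre, post: activity vectors; reward: scalar."""
    return weights + lr * reward * np.outer(post, pre)

rng = np.random.default_rng(2)
weights = rng.normal(scale=0.1, size=(4, 6))
pre, post = rng.random(6), rng.random(4)
reward = 1.0            # e.g., delivered when the target is foveated
weights = gated_hebb_update(weights, pre, post, reward)
```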
A Lagrangian Formulation For Optical Backpropagation Training In Kerr-Type Optical Networks
Steck, James Edward, Skinner, Steven R., Cruz-Cabrara, Alvaro A., Behrman, Elizabeth C.
A training method based on a form of continuous spatially distributed optical error back-propagation is presented for an all optical network composed of nondiscrete neurons and weighted interconnections. The all optical network is feed-forward and is composed of thin layers of a Kerr-type self-focusing/defocusing nonlinear optical material. The training method is derived from a Lagrangian formulation of the constrained minimization of the network error at the output. This leads to a formulation that describes training as a calculation of the distributed error of the optical signal at the output which is then reflected back through the device to assign a spatially distributed error to the internal layers. This error is then used to modify the internal weighting values.
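A rough discrete analogue of this training scheme is ordinary layer-wise error backpropagation: compute the error at the output, propagate it backwards to assign an error signal to each internal layer, and use those signals to update the weights. The sketch below shows only that generic discrete version; it is not the continuous Kerr-medium Lagrangian formulation of the paper, and all names are illustrative.

```python
# Generic layered-network sketch: the output error is propagated backwards
# to assign an error signal (delta) to each internal layer, which then
# drives the weight updates.
import numpy as np

def forward(x, weights):
    activations = [x]
    for w in weights:
        activations.append(np.tanh(w @ activations[-1]))
    return activations

def backward(activations, weights, target):
    """Return per-layer error signals, starting from the output error."""
    delta = (activations[-1] - target) * (1 - activations[-1] ** 2)
    deltas = [delta]
    for w, a in zip(reversed(weights[1:]), reversed(activations[1:-1])):
        delta = (w.T @ delta) * (1 - a ** 2)
        deltas.append(delta)
    return list(reversed(deltas))

def update(weights, activations, deltas, lr=0.05):
    return [w - lr * np.outer(d, a)
            for w, a, d in zip(weights, activations[:-1], deltas)]

rng = np.random.default_rng(3)
weights = [rng.normal(scale=0.5, size=(5, 4)),
           rng.normal(scale=0.5, size=(3, 5))]
x, target = rng.random(4), np.array([0.5, -0.5, 0.0])
acts = forward(x, weights)
weights = update(weights, acts, backward(acts, weights, target))
```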