Parallel analog VLSI architectures for computation of heading direction and time-to-contact

Neural Information Processing Systems

To exploit their properties at a system level, we developed parallel image processing architectures for applications that rely mostly on the qualitative properties of the optical flow, rather than on the precise values of the velocity vectors. Specifically, we designed two parallel architectures that employ arrays of elementary motion sensors for the computation of heading direction and time-to-contact. The application domain that we took into consideration for the implementation of such architectures is the promising one of vehicle navigation. Having defined the types of images to be analyzed and the types of processing to perform, we were able to use a priori information to integrate selectively the sparse data obtained from the velocity sensors and determine the qualitative properties of the optical flow field of interest.
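The architectures themselves are analog VLSI, but the computation they carry out can be sketched in software. The following minimal numpy sketch (function and variable names are ours, not the paper's) fits the radial flow model v = k(p - p0) to sparse flow samples: the fitted p0 is the focus of expansion, i.e. the heading direction in the image plane, and 1/k is the time-to-contact.

```python
import numpy as np

def foe_and_ttc(points, flows):
    """Least-squares fit of the radial flow model v = k * (p - p0).
    points: (N, 2) image positions of the motion sensors
    flows:  (N, 2) measured flow vectors at those positions
    Returns the focus of expansion p0 (heading in the image plane)
    and the time-to-contact 1/k, in frame units."""
    n = len(points)
    A = np.zeros((2 * n, 3))
    b = np.empty(2 * n)
    A[:n, 0], A[:n, 1] = points[:, 0], -1.0   # vx = k*x - (k*x0)
    A[n:, 0], A[n:, 2] = points[:, 1], -1.0   # vy = k*y - (k*y0)
    b[:n], b[n:] = flows[:, 0], flows[:, 1]
    (k, kx0, ky0), *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([kx0, ky0]) / k, 1.0 / k

# Synthetic check: expansion about (20, -5) with TTC = 40 frames.
rng = np.random.default_rng(0)
pts = rng.uniform(-100, 100, size=(50, 2))
flw = (pts - np.array([20.0, -5.0])) / 40.0
foe, ttc = foe_and_ttc(pts, flw)
print(foe, ttc)   # approx [20, -5] and 40
```

Because only the qualitative structure of the flow matters here, the fit tolerates sparse and noisy velocity samples, which is exactly the regime the elementary motion sensor arrays operate in.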


A Novel Channel Selection System in Cochlear Implants Using Artificial Neural Network

Neural Information Processing Systems

A cochlear implant is a device used to provide the sensation of sound to those who are profoundly deaf by means of electrical stimulation of residual auditory neurons. It generally consists of a directional microphone, a wearable speech processor, a headset transmitter and an implanted receiver-stimulator module with an electrode array, which all together provide an electrical representation of the speech signal to the residual nerve fibres of the peripheral auditory system (Clark et al., 1990).
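The abstract does not spell out the selection algorithm itself; as context, here is a minimal sketch of the conventional maxima-selection rule that implant speech processors apply per analysis frame. The paper's contribution is to replace a fixed rule of this kind with a trained ANN; the channel count and names below are illustrative assumptions.

```python
import numpy as np

def select_channels(envelopes, n_select=6):
    """Pick the n_select channels with the largest envelope amplitude
    in the current analysis frame (conventional maxima selection).
    envelopes: (n_channels,) filterbank envelope levels for one frame.
    Returns the selected channel indices in base-to-apex order."""
    idx = np.argsort(envelopes)[-n_select:]
    return np.sort(idx)

# One synthetic 20-channel frame.
frame = np.abs(np.random.default_rng(1).normal(size=20))
print(select_channels(frame))
```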



Constructive Algorithms for Hierarchical Mixtures of Experts

Neural Information Processing Systems

First, by applying a likelihood splitting criterion to each expert in the HME, we "grow" the tree adaptively during training. Second, by considering only the most probable path through the tree, we may "prune" branches away, either temporarily or permanently if they become redundant. We demonstrate results for the growing and path pruning algorithms which show significant speed-ups and more efficient use of parameters over the standard fixed structure in discriminating between two interlocking spirals and classifying 8-bit parity patterns.

INTRODUCTION. The HME (Jordan & Jacobs 1994) is a tree-structured network whose terminal nodes are simple function approximators in the case of regression, or classifiers in the case of classification. The outputs of the terminal nodes, or experts, are recursively combined upwards towards the root node, to form the overall output of the network, by "gates" situated at the non-terminal nodes.
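A hedged sketch of the structure being grown and pruned: a two-level HME whose gates are softmax functions of the input and whose experts are linear, with the full soft evaluation and the most-probable-path evaluation side by side. Shapes and names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

class HME2:
    """Minimal two-level hierarchical mixture of experts."""
    def __init__(self, dim, branch=2, seed=0):
        rng = np.random.default_rng(seed)
        self.Vtop = rng.normal(size=(branch, dim))          # root gate
        self.Vsub = rng.normal(size=(branch, branch, dim))  # inner gates
        self.W = rng.normal(size=(branch, branch, dim))     # linear experts

    def output(self, x):
        """Full soft evaluation: blend all expert outputs via the gates."""
        g = softmax(self.Vtop @ x)
        y = 0.0
        for i in range(len(g)):
            h = softmax(self.Vsub[i] @ x)
            y += g[i] * h @ (self.W[i] @ x)
        return y

    def output_pruned(self, x):
        """Path pruning: follow only the most probable branch at each gate."""
        i = np.argmax(self.Vtop @ x)   # argmax of logits = argmax of softmax
        j = np.argmax(self.Vsub[i] @ x)
        return self.W[i, j] @ x

net = HME2(dim=3)
x = np.array([0.5, -1.0, 2.0])
print(net.output(x), net.output_pruned(x))
```

The pruned pass touches one expert per level instead of all of them, which is where the reported speed-ups come from when a single path dominates the gate probabilities.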


Universal Approximation and Learning of Trajectories Using Oscillators

Neural Information Processing Systems

Natural and artificial neural circuits must be capable of traversing specific state space trajectories. A natural approach to this problem is to learn the relevant trajectories from examples. Unfortunately, gradient descent learning of complex trajectories in amorphous networks is unsuccessful. We suggest a possible approach where trajectories are realized by combining simple oscillators in various modular ways. We contrast two regimes of fast and slow oscillations.
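As a toy illustration of why oscillator building blocks help (our sketch, not the paper's model): with a bank of fixed-frequency oscillators, matching a target trajectory reduces to a linear least-squares fit of the oscillator amplitudes and phases, sidestepping gradient descent through an amorphous network entirely.

```python
import numpy as np

def fit_oscillator_bank(t, target, freqs):
    """Represent a trajectory as a sum of simple oscillators:
    y(t) = c0 + sum_k a_k cos(w_k t) + b_k sin(w_k t).
    With the frequencies fixed, the coefficients (equivalently,
    amplitudes and phases) solve a linear least-squares problem."""
    cols = [np.ones_like(t)]
    for w in freqs:
        cols += [np.cos(w * t), np.sin(w * t)]
    B = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(B, target, rcond=None)
    return coef, B @ coef   # coefficients and reconstruction

t = np.linspace(0.0, 2 * np.pi, 200)
target = np.sign(np.sin(t))                   # a square-ish trajectory
coef, recon = fit_oscillator_bank(t, target, freqs=[1, 3, 5, 7])
print(np.max(np.abs(recon - target)))         # residual Gibbs overshoot
```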


Quadratic-Type Lyapunov Functions for Competitive Neural Networks with Different Time-Scales

Neural Information Processing Systems

Anke Meyer-Bäse, Institute of Technical Informatics, Technical University of Darmstadt, Darmstadt, Germany 64283

The dynamics of complex neural networks modelling the self-organization process in cortical maps must include the aspects of long and short-term memory. The behaviour of the network is thus characterized by an equation of neural activity as a fast phenomenon and an equation of synaptic modification as a slow part of the neural system. We present a quadratic-type Lyapunov function for the flow of a competitive neural system with fast and slow dynamic variables. We also show the consequences of the stability analysis on the neural net parameters.

1 INTRODUCTION. This paper investigates a special class of laterally inhibited neural networks. In particular, we have examined the dynamics of a restricted class of laterally inhibited neural networks from a rigorous analytic standpoint.
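Schematically (our notation, not the paper's exact equations), the fast-slow system and a quadratic-type Lyapunov candidate have the following form, where x is the fast neural activity, m the slow synaptic state, ε the time-scale ratio, and P, Q are symmetric positive definite:

```latex
% Fast--slow (singularly perturbed) competitive system, schematic:
\varepsilon\,\dot{x} = f(x, m) \quad \text{(fast: neural activity)},
\qquad
\dot{m} = g(x, m) \quad \text{(slow: synaptic modification)}.

% Quadratic-type Lyapunov candidate, weighting fast and slow parts
% by d \in (0,1):
V(x, m) \;=\; (1-d)\, x^{\top} P\, x \;+\; d\, m^{\top} Q\, m.

% Stability of the equilibrium follows when \dot V < 0 along
% trajectories; this condition constrains the network parameters
% and bounds the admissible time-scale ratio \varepsilon < \varepsilon^{*}.
```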


Learning Model Bias

Neural Information Processing Systems

In this paper the problem of learning appropriate domain-specific bias is addressed. It is shown that this can be achieved by learning many related tasks from the same domain, and a theorem is given bounding the number of tasks that must be learnt.


Investment Learning with Hierarchical PSOMs

Neural Information Processing Systems

We propose a hierarchical scheme for rapid learning of context-dependent "skills" that is based on the recently introduced "Parameterized Self Organizing Map" ("PSOM"). The underlying idea is to first invest some learning effort to specialize the system into a rapid learner for a more restricted range of contexts. The specialization is carried out by a prior "investment learning stage", during which the system acquires a set of basis mappings or "skills" for a set of prototypical contexts. Adaptation of a "skill" to a new context can then be achieved by interpolating in the space of the basis mappings and thus can be extremely rapid. We demonstrate the potential of this approach for the task of a 3D visuomotor map for a Puma robot and two cameras. This includes the forward and backward robot kinematics in 3D end effector coordinates, the 2D+2D retina coordinates and also the 6D joint angles. After the investment phase the transformation can be learned for a new camera setup with a single observation.
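A hedged sketch of the core idea of interpolating in the space of basis mappings: the PSOM itself interpolates on a manifold built from orthogonal basis functions, but the simplified version below (inverse-distance weights, our names) shows how a parameter set for a new context can be blended from the "skills" stored during the investment stage.

```python
import numpy as np

def interpolate_skill(context, proto_contexts, proto_params):
    """Blend the parameter sets ("skills") learned for prototypical
    contexts into a parameter set for a new context.
    proto_contexts: (K, c) contexts seen in the investment stage
    proto_params:   (K, p) mapping parameters acquired for each one"""
    d = np.linalg.norm(proto_contexts - context, axis=1)
    if np.any(d < 1e-12):                 # exact prototype: reuse it
        return proto_params[np.argmin(d)]
    w = 1.0 / d                           # inverse-distance weights
    w /= w.sum()
    return w @ proto_params

ctxs = np.array([[0.0], [1.0], [2.0]])    # e.g. a 1-D camera parameter
params = np.array([[1.0, 0.0], [2.0, 1.0], [3.0, 4.0]])
print(interpolate_skill(np.array([1.5]), ctxs, params))
```

Because adaptation is just this interpolation step, a new context requires only enough observations to locate it among the prototypes, which is why a single observation can suffice after the investment phase.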


Prediction of Beta Sheets in Proteins

Neural Information Processing Systems

Most current methods for prediction of protein secondary structure use a small window of the protein sequence to predict the structure of the central amino acid. We describe a new method for prediction of the non-local structure called β-sheet, which consists of two or more β-strands that are connected by hydrogen bonds. Since β-strands are often widely separated in the protein chain, a network with two windows is introduced. After training on a set of proteins the network predicts the sheets well, but there are many false positives. By using a global energy function, the β-sheet prediction is combined with a local prediction of the three secondary structures α-helix, β-strand and coil.
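A minimal sketch of the two-window input encoding (our construction; the trained network and the global energy function are stood in for by a random MLP): two sequence windows, one around each candidate strand residue, are concatenated and scored as a pairing probability.

```python
import numpy as np

def pair_window_features(seq_onehot, i, j, w=5):
    """Concatenate two sequence windows centred on residues i and j,
    the candidate hydrogen-bonding partners of a beta-sheet pair.
    seq_onehot: (L, 20) one-hot encoded sequence; positions outside
    [0, L) contribute zero vectors (padding)."""
    L = seq_onehot.shape[0]
    def window(c):
        rows = []
        for k in range(c - w // 2, c + w // 2 + 1):
            rows.append(seq_onehot[k] if 0 <= k < L else np.zeros(20))
        return np.concatenate(rows)
    return np.concatenate([window(i), window(j)])

# Hypothetical scorer: a tiny random MLP standing in for the trained net.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 200)) * 0.1     # 2 windows * 5 * 20 = 200 inputs
W2 = rng.normal(size=16) * 0.1
seq = np.eye(20)[rng.integers(0, 20, size=60)]   # random 60-residue chain
x = pair_window_features(seq, i=10, j=42)
score = 1.0 / (1.0 + np.exp(-(W2 @ np.tanh(W1 @ x))))
print(score)   # probability that residues 10 and 42 pair in a sheet
```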


Stable Dynamic Parameter Adaption

Neural Information Processing Systems

A stability criterion for dynamic parameter adaptation is given. In the case of the learning rate of backpropagation, a class of stable algorithms is presented and studied, including a convergence proof.

1 INTRODUCTION. All but a few learning algorithms employ one or more parameters that control the quality of learning. Backpropagation has its learning rate and momentum parameter; Boltzmann learning uses a simulated annealing schedule; Kohonen learning a learning rate and a decay parameter; genetic algorithms probabilities, etc. The investigator always has to set the parameters to specific values when trying to solve a certain problem. Traditionally, the meta-problem of adjusting the parameters is solved by relying on a set of well-tested values from other problems, or by an intensive search for good parameter regions by restarting the experiment with different values. In this situation, a great deal of expertise and/or time for experiment design is required (as well as a huge amount of computing time).
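One simple member of such a class of algorithms (a hedged sketch, not the paper's exact criterion): at each step, try a slightly larger and a slightly smaller learning rate and keep whichever yields the lower loss, so the rate adapts dynamically while individual steps remain stable.

```python
import numpy as np

def adapt_step(params, loss, grad, eta, zeta=1.5):
    """One gradient step with dynamic learning-rate adaptation:
    evaluate the step under eta*zeta and eta/zeta and keep the
    candidate with the lower loss (illustrative rule)."""
    g = grad(params)
    best = None
    for candidate in (eta * zeta, eta / zeta):
        p = params - candidate * g
        l = loss(p)
        if best is None or l < best[0]:
            best = (l, p, candidate)
    _, params, eta = best
    return params, eta

# Quadratic toy problem: loss(x) = 0.5 * x^T A x.
A = np.diag([1.0, 10.0])
loss = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x, eta = np.array([1.0, 1.0]), 0.01
for _ in range(50):
    x, eta = adapt_step(x, loss, grad, eta)
print(loss(x), eta)
```

The rate grows geometrically while larger steps keep paying off and shrinks as soon as they overshoot, which removes the hand-tuning burden the introduction describes.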