A Parallel Analog CCD/CMOS Signal Processor
Neugebauer, Charles F., Yariv, Amnon
A CCD-based signal processing IC that computes a fully parallel single-quadrant vector-matrix multiplication has been designed and fabricated with a 2µm CCD/CMOS process. The device incorporates an array of Charge Coupled Devices (CCD) which hold an analog matrix of charge encoding the matrix elements. Input vectors are digital with 1-8 bit accuracy.
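As a rough numerical sketch of the computation the chip performs (the bit-serial accumulation scheme, the single-quadrant reading as non-negative weights and inputs, and all names below are assumptions for illustration, not the device's circuitry):

import numpy as np

def single_quadrant_vmm(weights, x, bits=8):
    # weights: non-negative (M, N) array, standing in for the analog charge matrix.
    # x: length-N vector in [0, 1), quantized to `bits` of digital accuracy.
    levels = 2 ** bits
    codes = np.clip(np.floor(x * levels), 0, levels - 1).astype(int)
    y = np.zeros(weights.shape[0])
    # Bit-serial accumulation: one binary input plane per pass,
    # each partial product weighted by its binary significance.
    for b in range(bits):
        bit_plane = (codes >> b) & 1
        y += (2 ** b) * (weights @ bit_plane)
    return y / levels  # rescale back to the original input range

# Example with hypothetical sizes; the result approximates W @ x up to quantization error.
rng = np.random.default_rng(0)
W = rng.uniform(0, 1, size=(4, 16))
x = rng.uniform(0, 1, size=16)
print(single_quadrant_vmm(W, x, bits=8), W @ x)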
A Cortico-Cerebellar Model that Learns to Generate Distributed Motor Commands to Control a Kinematic Arm
Berthier, N. E., Singh, S. P., Barto, A. G., Houk, J. C.
A neurophysiologically-based model is presented that controls a simulated kinematic arm during goal-directed reaches. The network generates a quasi-feedforward motor command that is learned using training signals generated by corrective movements. For each target, the network selects and sets the output of a subset of pattern generators. During the movement, feedback from proprioceptors turns off the pattern generators. The task facing individual pattern generators is to recognize when the arm reaches the target and to turn off. A distributed representation of the motor command that resembles population vectors seen in vivo was produced naturally by these simulations.
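A heavily simplified caricature of this control scheme (a 1-D kinematic arm, equal command shares per generator, and hand-set turn-off thresholds are all assumptions; in the model the turn-off behaviour is learned from corrective movements):

import numpy as np

def reach(target, n_generators=8, dt=0.01, steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    # Hypothetical turn-off thresholds for the selected pattern generators.
    thresholds = np.sort(rng.uniform(0.0, target, n_generators))
    active = np.ones(n_generators, dtype=bool)
    position = 0.0
    for _ in range(steps):
        command = active.sum() / n_generators   # distributed motor command
        position += dt * command                # kinematic arm: command sets velocity
        active &= position < thresholds         # proprioceptive feedback turns generators off
    return position

print(reach(target=0.5))  # movement stops just past the largest threshold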
Generalization Performance in PARSEC - A Structured Connectionist Parsing Architecture
This paper presents PARSEC, a system for generating connectionist parsing networks from example parses. PARSEC is not based on formal grammar systems and is geared toward spoken language tasks. PARSEC networks exhibit three strengths important for application to speech processing: 1) they learn to parse and generalize well compared to hand-coded grammars; 2) they tolerate several types of noise; 3) they can learn to use multi-modal input. Presented are the PARSEC architecture and performance analyses along several dimensions that demonstrate PARSEC's features. PARSEC's performance is compared to that of traditional grammar-based parsing systems.
Direction Selective Silicon Retina that uses Null Inhibition
Benson, Ronald G., Delbrück, Tobi
Biological retinas extract spatial and temporal features in an attempt to reduce the complexity of performing visual tasks. We have built and tested a silicon retina which encodes several useful temporal features found in vertebrate retinas. The cells in our silicon retina are selective for direction, highly sensitive to positive contrast changes around an ambient light level, and tuned to a particular velocity. Inhibitory connections in the null direction provide the direction selectivity we desire. This silicon retina is on a 4.6 x 6.8 mm die and consists of a 47 x 41 array of photoreceptors.
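A toy Barlow-Levick-style illustration of null-direction inhibition (the 1-D geometry, unit delay, and thresholded cell model below are assumptions, not the chip's analog circuits):

import numpy as np

def direction_selective_response(stimulus, delay=1):
    # stimulus: (time, position) array of positive contrast changes.
    # Each cell is excited by its own photoreceptor and inhibited by the
    # delayed signal from its neighbour on the null side, so null-direction
    # motion is vetoed while preferred-direction motion passes through.
    T, N = stimulus.shape
    total = 0.0
    for t in range(T):
        inhibit = np.zeros(N)
        if t >= delay:
            inhibit[:-1] = stimulus[t - delay, 1:]
        total += np.maximum(stimulus[t] - inhibit, 0.0).sum()
    return total

# A bright bar sweeping rightward (preferred) versus leftward (null):
T = N = 12
rightward = np.zeros((T, N)); leftward = np.zeros((T, N))
for t in range(T):
    rightward[t, t] = 1.0
    leftward[t, N - 1 - t] = 1.0
print(direction_selective_response(rightward),  # large response
      direction_selective_response(leftward))   # almost fully suppressed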
Principles of Risk Minimization for Learning Theory
Learning is posed as a problem of function estimation, for which two principles of solution are considered: empirical risk minimization and structural risk minimization. These two principles are applied to two different statements of the function estimation problem: global and local. Systematic improvements in prediction power are illustrated in application to zip-code recognition.
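For reference, the standard definitions behind the two principles, in the usual notation with loss $L$, parameters $\alpha$, and $\ell$ training pairs:

\[
R(\alpha) = \int L\bigl(y, f(x,\alpha)\bigr)\, dP(x,y),
\qquad
R_{\mathrm{emp}}(\alpha) = \frac{1}{\ell} \sum_{i=1}^{\ell} L\bigl(y_i, f(x_i,\alpha)\bigr).
\]

Empirical risk minimization selects the $\alpha$ minimizing $R_{\mathrm{emp}}$; structural risk minimization minimizes it over a nested sequence of function classes $S_1 \subset S_2 \subset \cdots$ of increasing capacity, choosing the class that best trades empirical risk against a capacity term.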
Connectionist Optimisation of Tied Mixture Hidden Markov Models
Renals, Steve, Morgan, Nelson, Bourlard, Hervé, Franco, Horacio, Cohen, Michael
Issues relating to the estimation of hidden Markov model (HMM) local probabilities are discussed. In particular we note the isomorphism of radial basis function (RBF) networks to tied mixture density modelling; additionally we highlight the differences between these methods arising from the different training criteria employed. We present a method in which connectionist training can be modified to resolve these differences and discuss some preliminary experiments. Finally, we discuss some outstanding problems with discriminative training.
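A minimal sketch of the isomorphism noted above (sizes, names, and diagonal covariances are assumptions): a tied-mixture state likelihood is an RBF network whose hidden units are the shared Gaussians and whose output weights are the per-state mixture coefficients.

import numpy as np

def gaussian_basis(x, means, variances):
    # Shared Gaussian "hidden units", one per tied mixture component.
    d = x[None, :] - means
    expo = -0.5 * np.sum(d * d / variances, axis=1)
    norm = np.sqrt(np.prod(2 * np.pi * variances, axis=1))
    return np.exp(expo) / norm

def tied_mixture_likelihoods(x, means, variances, mix_weights):
    # mix_weights[q, k]: per-state coefficients over the shared components.
    # Viewed as an RBF network, the Gaussians form the hidden layer and
    # mix_weights are the (non-negative, row-normalised) output weights.
    phi = gaussian_basis(x, means, variances)
    return mix_weights @ phi   # one "output unit" per HMM state

# Hypothetical sizes: 4 states, 8 shared components, 2-D features.
rng = np.random.default_rng(0)
means = rng.normal(size=(8, 2)); variances = np.ones((8, 2))
mix = rng.dirichlet(np.ones(8), size=4)
print(tied_mixture_likelihoods(rng.normal(size=2), means, variances, mix))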
Networks for the Separation of Sources that are Superimposed and Delayed
Platt, John C., Faggin, Federico
We have created new networks to unmix signals which have been mixed either with time delays or via filtering. We first show that a subset of the Herault-Jutten learning rules fulfills a principle of minimum output power. We then apply this principle to extensions of the Herault-Jutten network which have delays in the feedback path. Our networks perform well on real speech and music signals that have been mixed using time delays or filtering.
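A minimal sketch of a delayed feedback unmixing network adapted by reducing output power (a single assumed cross-delay per path and a plain stochastic-gradient rule; not the paper's exact architecture or learning rule):

import numpy as np

def separate(x1, x2, delay=5, lr=1e-3, iters=3):
    # Feedback unmixing: each output subtracts a delayed, weighted copy of
    # the other output; the weights adapt to reduce the output power.
    a12 = a21 = 0.0
    y1, y2 = np.array(x1, float), np.array(x2, float)
    for _ in range(iters):
        for t in range(delay, len(x1)):
            y1[t] = x1[t] - a12 * y2[t - delay]
            y2[t] = x2[t] - a21 * y1[t - delay]
            # Stochastic-gradient steps that decrease E[y1^2] and E[y2^2].
            a12 += lr * y1[t] * y2[t - delay]
            a21 += lr * y2[t] * y1[t - delay]
    return y1, y2

# Toy demo: two synthetic "sources" mixed with a cross delay (hypothetical setup).
t = np.arange(2000)
s1, s2 = np.sin(0.03 * t), np.sign(np.sin(0.011 * t))
x1 = s1 + 0.6 * np.roll(s2, 5)
x2 = s2 + 0.6 * np.roll(s1, 5)
u1, u2 = separate(x1, x2, delay=5)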