Plasticity of Center-Surround Opponent Receptive Fields in Real and Artificial Neural Systems of Vision
Yasui, S., Furukawa, T., Yamada, M., Saito, T.
The center-surround opponent receptive field (CSRF) mechanism represents one such example. Here, analogous CSRFs are shown to be formed in an artificial neural network which learns to localize contours (edges) of luminance difference. Furthermore, when the input pattern is corrupted by background noise, the CSRFs of the hidden units become shallower and broader as the signal-to-noise ratio (SNR) decreases. The same kind of SNR-dependent plasticity is present in the CSRFs of real visual neurons: in bipolar cells of the carp retina, as is shown here experimentally, as well as in large monopolar cells of the fly compound eye, as has been described by others. Analogous SNR-dependent plasticity is also shown to be present in the biphasic flash responses (BPFR) of these artificial and biological visual systems. Thus, the spatial (CSRF) and temporal (BPFR) filtering properties with which a wide variety of creatures see the world appear to be optimized for the detectability of changes in space and time. 1 INTRODUCTION A number of learning algorithms have been developed to make synthetic neural machines trainable to function in certain optimal ways. If the brains and nervous systems that we see in nature are the best answers of the evolutionary process, then one might be able to find some common 'software' in real and artificial neural systems. This possibility is examined in this paper, with respect to a basic visual
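As a hedged illustration of the CSRF idea, here is a minimal NumPy sketch of a difference-of-Gaussians center-surround filter; the kernel size, widths, and surround weight are illustrative assumptions, not values measured or learned in the paper:

    import numpy as np

    def dog_receptive_field(size=21, sigma_c=1.0, sigma_s=3.0, k=0.9):
        # Excitatory Gaussian center minus a broader inhibitory Gaussian surround.
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        r2 = xx ** 2 + yy ** 2
        center = np.exp(-r2 / (2 * sigma_c ** 2)) / (2 * np.pi * sigma_c ** 2)
        surround = np.exp(-r2 / (2 * sigma_s ** 2)) / (2 * np.pi * sigma_s ** 2)
        return center - k * surround

    # The SNR-dependent plasticity described above would correspond to
    # broadening the widths and shrinking the peak as the noise increases.
    rf = dog_receptive_field()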
A Dynamical Systems Approach for a Learnable Autonomous Robot
This paper discusses how a robot can learn goal-directed navigation tasks using local sensory inputs. The emphasis is that such learning tasks can be formulated as an embedding problem in dynamical systems: desired trajectories in a task space should be embedded into an adequate sensory-based internal state space so that a unique mapping from the internal state space to the motor command can be established. The paper shows that a recurrent neural network suffices to self-organize such an adequate internal state space from the temporal sensory input.
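A minimal sketch of the kind of recurrent state update involved, assuming a simple Elman-style network; the dimensions, tanh nonlinearity, and random weights are illustrative placeholders, not the paper's trained model:

    import numpy as np

    def rnn_step(x, h, Wxh, Whh, Who, bh, bo):
        # The internal state h folds the history of local sensory inputs x
        # into a fixed-size vector, from which the motor command is read out.
        h = np.tanh(Wxh @ x + Whh @ h + bh)
        motor = Who @ h + bo
        return h, motor

    rng = np.random.default_rng(0)
    n_in, n_h, n_out = 4, 16, 2  # sensor, state, and motor dimensions (illustrative)
    params = [rng.normal(scale=0.1, size=s) for s in
              [(n_h, n_in), (n_h, n_h), (n_out, n_h), (n_h,), (n_out,)]]
    h = np.zeros(n_h)
    for x in rng.normal(size=(10, n_in)):  # a short stream of sensory input
        h, motor = rnn_step(x, h, *params)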
Examples of learning curves from a modified VC-formalism
Kowalczyk, Adam, Szymanski, Jacek, Bartlett, Peter L., Williamson, Robert C.
We examine the issue of evaluation of model-specific parameters in a modified VC-formalism. Two examples are analyzed: the 2-dimensional homogeneous perceptron and the 1-dimensional higher-order neuron. Both models are solved theoretically, and their learning curves are compared against true learning curves. It is shown that the formalism has the potential to generate a variety of learning curves, including ones displaying "phase transitions."
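As a hedged illustration of what a learning curve for the 2-dimensional homogeneous perceptron looks like, the following Monte Carlo sketch estimates the average generalization error of a random consistent (Gibbs) hypothesis as a function of the number of examples; the uniform input distribution and the Gibbs learner are assumptions for illustration, not the paper's modified VC-formalism:

    import numpy as np

    rng = np.random.default_rng(0)

    def avg_gibbs_error(m, trials=2000):
        # Hypotheses are directions phi on the circle; the generalization
        # error of phi against the target theta is |angle(phi, theta)| / pi.
        errs = []
        for _ in range(trials):
            theta = rng.uniform(0, 2 * np.pi)          # target direction
            alpha = rng.uniform(0, 2 * np.pi, size=m)  # example directions
            # Each example's decision boundary sits at alpha +/- pi/2; the
            # version space is the arc around theta between the nearest
            # boundaries on either side.
            bounds = np.concatenate([alpha + np.pi / 2, alpha - np.pi / 2])
            d = (bounds - theta) % (2 * np.pi)
            hi = d.min()                 # nearest boundary counterclockwise
            lo = d.max() - 2 * np.pi     # nearest boundary clockwise
            phi = theta + rng.uniform(lo, hi)  # uniform over the version space
            errs.append(np.abs((phi - theta + np.pi) % (2 * np.pi) - np.pi) / np.pi)
        return np.mean(errs)

    for m in (1, 2, 4, 8, 16):
        print(m, avg_gibbs_error(m))  # error decays as m grows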
Visual gesture-based robot guidance with a modular neural system
Littmann, Enno, Drees, Andrea, Ritter, Helge
We report on the development of the modular neural system "SEE EAGLE" for the visual guidance of robot pick-and-place actions. Several neural networks are integrated to a single system that visually recognizeshuman hand pointing gestures from stereo pairs of color video images. The output of the hand recognition stage is processed by a set of color-sensitive neural networks to determine the cartesian location of the target object that is referenced by the pointing gesture. Finally, this information is used to guide a robot to grab the target object and put it at another location that can be specified by a second pointing gesture. The accuracy of the current systemallows to identify the location of the referenced target object to an accuracy of 1 cm in a workspace area of 50x50 cm.
Generalization in Reinforcement Learning: Successful Examples Using Sparse Coarse Coding
On large problems, reinforcement learning systems must use parameterized function approximators such as neural networks in order to generalize between similar situations and actions. In these cases there are no strong theoretical results on the accuracy of convergence, and computational results have been mixed. In particular, Boyan and Moore reported at last year's meeting a series of negative results in attempting to apply dynamic programming together with function approximation to simple control problems with continuous state spaces. In this paper, we present positive results for all the control tasks they attempted, and for one that is significantly larger. The most important differences are that we used sparse-coarse-coded function approximators (CMACs) whereas they used mostly global function approximators, and that we learned online whereas they learned offline. Boyan and Moore and others have suggested that the problems they encountered could be solved by using actual outcomes ("rollouts"), as in classical Monte Carlo methods, and as in the TD(λ) algorithm when λ = 1.
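A minimal 1-dimensional tile-coding (CMAC) sketch illustrating the sparse coarse coding with cheap online updates that the abstract describes; the tiling sizes, the wraparound at the boundary, and the supervised sine target are illustrative assumptions, not the paper's control tasks:

    import numpy as np

    class CMAC:
        # Each of n_tilings offset grids contributes exactly one active tile
        # for a state s in [0, 1), so s maps to a sparse binary feature
        # vector and each update touches only n_tilings weights.
        def __init__(self, n_tilings=8, n_tiles=10, alpha=0.1):
            self.n_tilings, self.n_tiles = n_tilings, n_tiles
            self.w = np.zeros(n_tilings * n_tiles)
            self.alpha = alpha / n_tilings  # step size shared across tilings

        def active_tiles(self, s):
            offsets = np.arange(self.n_tilings) / (self.n_tilings * self.n_tiles)
            idx = np.floor((s + offsets) * self.n_tiles).astype(int) % self.n_tiles
            return idx + np.arange(self.n_tilings) * self.n_tiles

        def predict(self, s):
            return self.w[self.active_tiles(s)].sum()

        def update(self, s, target):
            tiles = self.active_tiles(s)
            self.w[tiles] += self.alpha * (target - self.w[tiles].sum())

    # Supervised sanity check: learn f(s) = sin(2*pi*s) online.
    cmac, rng = CMAC(), np.random.default_rng(0)
    for _ in range(5000):
        s = rng.uniform()
        cmac.update(s, np.sin(2 * np.pi * s))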
Softassign versus Softmax: Benchmarks in Combinatorial Optimization
Gold, Steven, Rangarajan, Anand
A new technique, termed softassign, is applied for the first time to two classic combinatorial optimization problems, the traveling salesman problem and graph partitioning. Softassign, which has emerged from the recurrent neural network/statistical physics framework, enforces two-way (assignment) constraints without the use of penalty terms in the energy functions. The softassign can also be generalized from two-way winner-take-all constraints to multiple membership constraints, which are required for graph partitioning. The softassign technique is compared to the softmax (Potts glass). Within the statistical physics framework, softmax with a penalty term has been a widely used method for enforcing the two-way constraints common to many combinatorial optimization problems. The benchmarks present evidence that softassign has clear advantages in accuracy, speed, parallelizability and algorithmic simplicity over softmax with a penalty term in optimization problems with two-way constraints. 1 Introduction In a series of papers in the early to mid 1980's, Hopfield and Tank introduced techniques which allowed one to solve combinatorial optimization problems with recurrent neural networks [Hopfield and Tank, 1985].
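A minimal sketch of the core softassign operation, assuming the standard exponentiation-plus-Sinkhorn formulation: exponentiation keeps entries positive, and alternating row and column normalization enforces both sets of two-way constraints without penalty terms. In the full method this sits inside a deterministic annealing loop over beta, which is omitted here:

    import numpy as np

    def softassign(benefit, beta=10.0, n_iter=50):
        # beta controls how sharply the result approaches a hard assignment.
        M = np.exp(beta * (benefit - benefit.max()))  # shift for stability
        for _ in range(n_iter):
            M /= M.sum(axis=1, keepdims=True)  # row (two-way) constraints
            M /= M.sum(axis=0, keepdims=True)  # column (two-way) constraints
        return M

    rng = np.random.default_rng(0)
    B = rng.normal(size=(5, 5))
    print(softassign(B).round(2))  # near-doubly-stochastic, near-permutation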
Model Matching and SFMD Computation
Rehfuss, Steven, Hammerstrom, Dan W.
In systems that process sensory data there is frequently a model matching stage where class hypotheses are combined to recognize a complex entity. We introduce a new model of parallelism, the Single Function Multiple Data (SFMD) model, appropriate to this stage. SFMD functionality can be added with small hardware expense to certain existing SIMD architectures, and as an incremental addition to the programming model. Adding SFMD to an SIMD machine will not only allow faster model matching, but also increase its flexibility as a general purpose machine and its scope in performing the initial stages of sensory processing. 1 INTRODUCTION In systems that process sensory data there is frequently a post-classification stage where several independent class hypotheses are combined into the recognition of a more complex entity. Examples include matching word models with a string of observation probabilities, and matching visual object models with collections of edges or other features. Current parallel computer architectures for processing sensory data focus on the classification and pre-classification stages (Hammerstrom 1990). This is reasonable, as those stages likely have the largest potential for speedup through parallel execution. Nonetheless, the model-matching stage is also suitable for parallelism, as each model may be matched independently of the others. We introduce a new style of parallelism, Single Function Multiple Data (SFMD), that is suitable for the model-matching stage.
Neuron-MOS Temporal Winner Search Hardware for Fully-Parallel Data Processing
Shibata, Tadashi, Nakai, Tsutomu, Morimoto, Tatsuo, Kaihara, Ryu, Yamashita, Takeo, Ohmi, Tadahiro
Search for the largest (or the smallest) among a number of input data, i.e., the winner-take-all (WTA) action, is an essential part of intelligent data processing such as data retrieval in associative memories [3], vector quantization circuits [4], Kohonen's self-organizing maps [5], etc. In addition to the maximum or minimum search, data sorting also plays an essential role in a number of signal processing tasks, such as median filtering in image processing, evolutionary algorithms in optimization problems [6], and so forth.
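A software analogue of the temporal winner search idea, as a hedged sketch: a common threshold ramps down from the maximum, each element "fires" when the threshold crosses its value, so the first to fire is the WTA winner and the full firing order is a descending sort. The discretized ramp below is an illustrative assumption; the paper realizes this behavior in neuron-MOS circuitry, not software:

    import numpy as np

    def temporal_winner_sort(values, steps=1000):
        values = np.asarray(values, dtype=float)
        thresholds = np.linspace(values.max(), values.min(), steps)
        fired, order = np.zeros(len(values), dtype=bool), []
        for t in thresholds:
            just_fired = (values >= t) & ~fired  # crossed on this step
            order += np.flatnonzero(just_fired).tolist()
            fired |= just_fired
        return order  # indices in descending order of value

    print(temporal_winner_sort([3.1, 9.7, 0.4, 5.5]))  # [1, 3, 0, 2]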
The Gamma MLP for Speech Phoneme Recognition
Lawrence, Steve, Tsoi, Ah Chung, Back, Andrew D.
We define a Gamma multi-layer perceptron (MLP) as an MLP with the usual synaptic weights replaced by gamma filters (as proposed by de Vries and Principe (de Vries and Principe, 1992)) and associated gain terms throughout all layers. We derive gradient descent update equations and apply the model to the recognition of speech phonemes. We find that both the inclusion of gamma filters in all layers, and the inclusion of synaptic gains, improves the performance of the Gamma MLP. We compare the Gamma MLP with TDNN, Back-Tsoi FIR MLP, and Back-Tsoi IIR MLP architectures, and a local approximation scheme. We find that the Gamma MLP results in a substantial reduction in error rates. 1 INTRODUCTION 1.1 THE GAMMA FILTER Infinite Impulse Response (IIR) filters have a significant advantage over Finite Impulse Response (FIR) filters in signal processing: the length of the impulse response is uncoupled from the number of filter parameters.
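A minimal sketch of a single gamma filter, the building block described above: a cascade of identical leaky integrators whose one parameter mu sets the memory depth independently of the number of taps (the IIR advantage just noted). In the Gamma MLP each synaptic weight is replaced by such a filter plus a gain and both are trained by gradient descent; the tap weights and mu below are placeholders, not trained values:

    import numpy as np

    def gamma_filter(u, w, mu):
        # x_k(n) = (1 - mu) * x_k(n-1) + mu * x_{k-1}(n-1), with the raw
        # input as tap 0; the output is a weighted sum over all taps.
        K = len(w) - 1
        x = np.zeros(K + 1)
        y = np.empty(len(u))
        for n, sample in enumerate(u):
            x[1:] = (1 - mu) * x[1:] + mu * x[:-1]  # shift through the cascade
            x[0] = sample
            y[n] = w @ x
        return y

    y = gamma_filter(np.sin(np.arange(100) * 0.2), w=np.ones(5) / 5, mu=0.5)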