Event-Driven Simulation of Networks of Spiking Neurons
A fast event-driven software simulator has been developed for simulating large networks of spiking neurons and synapses. The primitive network elements are designed to exhibit biologically realistic behaviors, such as spiking, refractoriness, adaptation, axonal delays, summation of post-synaptic current pulses, and tonic current inputs. The efficient event-driven representation allows large networks to be simulated in a fraction of the time that would be required for a full compartmental-model simulation. Corresponding analog CMOS VLSI circuit primitives have been designed and characterized, so that large-scale circuits may be simulated prior to fabrication.

1 Introduction

Artificial neural networks typically use an abstraction of real neuron behaviour, in which the continuously varying mean firing rate of the neuron is presumed to carry the information about the neuron's time-varying state of excitation [1]. This useful simplification allows the neuron's state to be represented as a time-varying continuous-amplitude quantity.
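The core of an event-driven representation can be sketched in a few lines of code: every spike is an event on a time-ordered queue, delivered to its targets after an axonal delay and summed into a decaying post-synaptic state. This is a minimal illustration only, assuming leaky integrate-and-fire dynamics; the class Neuron, the function simulate, and all parameter values are illustrative assumptions, not the circuit-level primitives characterized in the paper.

# Minimal event-driven spiking-network sketch (illustrative assumptions only).
import heapq
import math

class Neuron:
    def __init__(self, threshold=1.0, tau=10.0, refractory=2.0):
        self.threshold = threshold    # firing threshold
        self.tau = tau                # decay time constant of the summed input
        self.refractory = refractory  # refractory period
        self.v = 0.0                  # summed post-synaptic state
        self.last_update = 0.0
        self.last_spike = -1e9

    def receive(self, t, weight):
        # Decay the state over the interval since the last event, then add the pulse.
        self.v *= math.exp(-(t - self.last_update) / self.tau)
        self.last_update = t
        self.v += weight
        return (self.v >= self.threshold and
                t - self.last_spike >= self.refractory)

def simulate(neurons, synapses, initial_events, t_end):
    """synapses[i]: list of (target_index, weight, axonal_delay) for neuron i."""
    events = list(initial_events)     # each event is (time, target_index, weight)
    heapq.heapify(events)
    spikes = []
    while events:
        t, i, w = heapq.heappop(events)
        if t > t_end:
            break
        if neurons[i].receive(t, w):  # threshold crossed outside refractoriness
            neurons[i].last_spike = t
            neurons[i].v = 0.0        # reset after the spike
            spikes.append((t, i))
            for j, wj, delay in synapses[i]:
                heapq.heappush(events, (t + delay, j, wj))
    return spikes

# Usage: two neurons in a loop, driven by one external input event at t = 0.
neurons = [Neuron(), Neuron()]
synapses = [[(1, 1.2, 3.0)], [(0, 1.2, 3.0)]]
print(simulate(neurons, synapses, [(0.0, 0, 1.5)], t_end=50.0))

Because computation is only performed when an event arrives, cost scales with the number of spikes rather than with simulated time steps, which is what makes the representation fast for large networks.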
Solvable Models of Artificial Neural Networks
Solvable models of nonlinear learning machines are proposed, and learning in artificial neural networks is studied based on the theory of ordinary differential equations. A learning algorithm is constructed by which the optimal parameters can be found without any recursive procedure. The solvable models make it possible to analyze why experimental results obtained with error backpropagation often contradict statistical learning theory.
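As one hedged illustration of how optimal parameters can be obtained without any recursive procedure (an assumed toy setting, not necessarily the solvable models analyzed in the paper): if a machine is nonlinear in its inputs but linear in its trainable parameters, the optimum follows from a single least-squares solve.

# Toy illustration (assumption, not the paper's model): fixed nonlinear hidden
# features, trainable output weights found in closed form, with no iteration.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(200, 1))                  # inputs
y = np.sin(3.0 * x) + 0.1 * rng.standard_normal(x.shape)   # noisy targets

centers = np.linspace(-1.0, 1.0, 10)
features = np.exp(-(x - centers) ** 2 / 0.1)               # fixed hidden layer

# Optimal output weights from a single least-squares solve.
w, *_ = np.linalg.lstsq(features, y, rcond=None)
print("training error:", np.mean((features @ w - y) ** 2))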
A Network Mechanism for the Determination of Shape-From-Texture
We propose a computational model for how the cortex discriminates shape and depth from texture. The model consists of four stages: (1) extraction of local spatial frequency, (2) frequency characterization, (3) detection of texture compression by normalization, and (4) integration of the normalized frequency over space. The model accounts for a number of psychophysical observations, including experiments based on novel random textures. These textures are generated from white noise and manipulated in the Fourier domain in order to produce specific frequency spectra. Simulations with a range of stimuli, including real images, show qualitative and quantitative agreement with human perception.

1 INTRODUCTION

There are several physical cues to shape and depth which arise from changes in projection as a surface curves away from view, or recedes in perspective.
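A rough sketch of the four-stage pipeline is given below, under strong simplifying assumptions: local spatial frequency is read out as the peak of a windowed FFT, normalization divides by the image-wide mean frequency, and spatial integration is a cumulative sum. The functions local_peak_frequency and shape_from_texture_response and the window parameters are illustrative assumptions, not the model's actual operators.

import numpy as np

def local_peak_frequency(image, win=32, step=16):
    """Stages 1-2: dominant spatial frequency in each local window (cycles/pixel)."""
    grid = []
    h, w = image.shape
    for r in range(0, h - win + 1, step):
        row = []
        for c in range(0, w - win + 1, step):
            patch = image[r:r + win, c:c + win]
            spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
            fy, fx = np.indices(spectrum.shape) - win // 2
            radius = np.hypot(fy, fx)
            spectrum[radius < 1] = 0.0                 # discard the DC component
            peak = np.unravel_index(np.argmax(spectrum), spectrum.shape)
            row.append(radius[peak] / win)
        grid.append(row)
    return np.array(grid)

def shape_from_texture_response(image):
    f = local_peak_frequency(image)
    compression = f / f.mean()         # Stage 3: normalize by the mean frequency
    return compression.cumsum(axis=0)  # Stage 4: integrate over space
                                       # (a crude relative-depth profile)

# Usage: a grating whose vertical frequency increases down the image, standing
# in for a receding textured surface.
r = np.arange(128)[:, None] / 128.0
image = np.tile(np.sin(2 * np.pi * (5 + 20 * r) * r), (1, 128))
print(shape_from_texture_response(image).mean(axis=1))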
How to Choose an Activation Function
Mhaskar, H. N., Micchelli, C. A.
In [10], we have shown that such a network using practically any nonlinear activation function can approximate any continuous function of any number of real variables on any compact set to any desired degree of accuracy. A central question in this theory is the following. If one needs to approximate a function from a known class of functions to a prescribed accuracy, how many neurons will be necessary to accomplish this approximation for all functions in the class?
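As a worked illustration of the kind of answer this theory provides (a general statement about this family of approximation results, not the specific theorem of the paper): for target functions on $[-1,1]^d$ having $s$ bounded derivatives, bounds of the form

\[
\operatorname{dist}(f, \Pi_n) \le C\, n^{-s/d}
\]

are typical, where $\Pi_n$ denotes the set of networks with $n$ neurons and $C$ depends on the function class but not on $n$. Guaranteeing accuracy $\epsilon$ for every function in the class then requires on the order of $n \approx (C/\epsilon)^{d/s}$ neurons, so the neuron count grows rapidly with the input dimension $d$ unless the smoothness $s$ grows with it.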
What Does the Hippocampus Compute?: A Precis of the 1993 NIPS Workshop
Computational models of the hippocampal region provide an important method for understanding the functional role of this brain system in learning and memory. The presentations in this workshop focused on how modeling can lead to a unified understanding of the interplay among hippocampal physiology, anatomy, and behavior. One approach can be characterized as "top-down" analyses of the neuropsychology of memory, drawing upon brain-lesion studies in animals and humans. Other models take a "bottom-up" approach, seeking to infer emergent computational and functional properties from detailed analyses of circuit connectivity and physiology (see Gluck & Granger, 1993, for a review). Among the issues discussed were: (1) integration of physiological and behavioral theories of hippocampal function, (2) similarities and differences between animal and human studies, (3) representational vs. temporal properties of hippocampal-dependent behaviors, (4) rapid vs. incremental learning, (5) multiple vs. unitary memory systems, (6) spatial navigation and memory, and (7) hippocampal interaction with other brain systems.
Robust Parameter Estimation and Model Selection for Neural Network Regression
In this paper, it is shown that the conventional back-propagation (BP) algorithm for neural network regression is robust to leverages (data with x corrupted), but not to outliers (data with y corrupted). A robust alternative is to model the error as a mixture of normal distributions. The influence function for this mixture model is calculated, and the condition for the model to be robust to outliers is given. The EM algorithm [5] is used to estimate the parameters. The usefulness of model selection criteria is also discussed.
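A minimal EM sketch of this error model is given below, using linear regression as a stand-in for the network and assumed initial values for the variances and mixing proportion; the function em_robust_regression and its parameters are illustrative, not the paper's estimator. Each iteration computes the responsibility of the narrow "inlier" component for every residual and then refits by weighted least squares, which downweights probable outliers.

# EM for regression with a two-component mixture-of-normals error model
# (illustrative sketch under the assumptions stated above).
import numpy as np

def gaussian(r, var):
    return np.exp(-r ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def em_robust_regression(X, y, n_iter=50):
    w = np.linalg.lstsq(X, y, rcond=None)[0]     # ordinary least-squares start
    var_in, var_out, pi_in = 1.0, 10.0, 0.9      # assumed initial values
    for _ in range(n_iter):
        r = y - X @ w
        # E-step: responsibility of the inlier component for each residual.
        p_in = pi_in * gaussian(r, var_in)
        p_out = (1 - pi_in) * gaussian(r, var_out)
        g = p_in / (p_in + p_out)
        # M-step: weighted least squares downweights probable outliers.
        weights = g / var_in + (1 - g) / var_out
        W = np.diag(weights)
        w = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        r = y - X @ w
        var_in = np.sum(g * r ** 2) / np.sum(g)
        var_out = np.sum((1 - g) * r ** 2) / np.sum(1 - g)
        pi_in = g.mean()
    return w, g

# Usage: a line with a few gross y-outliers.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 100)
y = 2.0 * x + 0.1 * rng.standard_normal(100)
y[::10] += 5.0                                   # corrupt every tenth target
X = np.column_stack([x, np.ones_like(x)])
w, g = em_robust_regression(X, y)
print("slope, intercept:", w)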
Connectionist Models for Auditory Scene Analysis
Although the visual and auditory systems share the same basic tasks of informing an organism about its environment, most connectionist work on hearing to date has been devoted to the very different problem of speech recognition. We believe that the most fundamental task of the auditory system is the analysis of acoustic signals into components corresponding to individual sound sources, which Bregman has called auditory scene analysis. Computational and connectionist work on auditory scene analysis is reviewed, and the outline of a general model that includes these approaches is described.