A Topographic Product for the Optimization of Self-Organizing Feature Maps
Bauer, Hans-Ulrich, Pawelzik, Klaus, Geisel, Theo
We present a topographic product which measures the preservation of neighborhood relations as a criterion to optimize the output space topology of the map with regard to the global dimensionality DA as well as to the dimensions in the individual directions. We test the topographic product method not only on synthetic mapping examples, but also on speech data.
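As a rough illustration of the kind of measure the abstract describes, the Python sketch below computes a topographic-product-style quantity from a map's codebook vectors and its output-grid positions. The function name and the exact normalization follow the published definition only loosely and should be read as assumptions, not the authors' implementation.

import numpy as np

def topographic_product(weights, grid):
    # weights: (N, D) codebook vectors in input space; grid: (N, d) unit
    # positions in output space. Values near 0 indicate that neighborhood
    # order is preserved between the two spaces.
    N = len(weights)
    dV = np.linalg.norm(weights[:, None] - weights[None, :], axis=-1)
    dA = np.linalg.norm(grid[:, None] - grid[None, :], axis=-1)
    total = 0.0
    for j in range(N):
        nnA = np.argsort(dA[j])[1:]   # k-th nearest neighbors of j in output space
        nnV = np.argsort(dV[j])[1:]   # k-th nearest neighbors of j in input space
        q1 = dV[j, nnA] / dV[j, nnV]  # distortion seen with input-space distances
        q2 = dA[j, nnA] / dA[j, nnV]  # distortion seen with output-space distances
        k = np.arange(1, N)
        total += np.sum(np.cumsum(np.log(q1 * q2)) / (2 * k))  # log-space geometric means
    return total / (N * (N - 1))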
Markov Random Fields Can Bridge Levels of Abstraction
Cooper, Paul R., Prokopowicz, Peter N.
Network vision systems must make inferences from evidential information across levels of representational abstraction, from low level invariants, through intermediate scene segments, to high level behaviorally relevant object descriptions. This paper shows that such networks can be realized as Markov Random Fields (MRFs). We show first how to construct an MRF functionally equivalent to a Hough transform parameter network, thus establishing a principled probabilistic basis for visual networks. Second, we show that these MRF parameter networks are more capable and flexible than traditional methods. In particular, they have a well-defined probabilistic interpretation, intrinsically incorporate feedback, and offer richer representations and decision capabilities.
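The construction starts from a Hough transform parameter network; as orientation for readers unfamiliar with that structure, here is a minimal Python sketch of a line-detecting Hough accumulator in which each (theta, rho) cell plays the role of a parameter unit collecting votes from image-level evidence. The discretization and parameter names are illustrative assumptions, not the authors' MRF formulation.

import numpy as np

def hough_line_accumulator(edge_points, n_theta=180, n_rho=100, rho_max=128.0):
    # edge_points: iterable of (x, y) image coordinates of edge pixels.
    acc = np.zeros((n_theta, n_rho))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    for x, y in edge_points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)              # rho for every theta
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        ok = (idx >= 0) & (idx < n_rho)
        acc[np.nonzero(ok)[0], idx[ok]] += 1                       # each edge point votes
    return acc, thetas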
The Clusteron: Toward a Simple Abstraction for a Complex Neuron
The nature of information processing in complex dendritic trees has remained an open question since the origin of the neuron doctrine 100 years ago. With respect to learning, for example, it is not known whether a neuron is best modeled as a pseudo-linear unit, equivalent in power to a simple Perceptron, or as a general nonlinear learning device, equivalent in power to a multi-layered network. In an attempt to characterize the input-output behavior of a whole dendritic tree containing voltage-dependent membrane mechanisms, a recent compartmental modeling study in an anatomically reconstructed neocortical pyramidal cell (anatomical data from Douglas et al., 1991; "NEURON" simulation package provided by Michael Hines and John Moore) showed that a dendritic tree rich in NMDA-type synaptic channels is selectively responsive to spatially clustered, as opposed to diffuse, patterns of synaptic activation (Mel, 1992). For example, 100 synapses which were simultaneously activated at 100 randomly chosen locations about the dendritic arbor were less effective at firing the cell than 100 synapses activated in groups of 5, at each of 20 randomly chosen dendritic locations. The cooperativity among the synapses in each group is due to the voltage dependence of the NMDA channel: Each activated NMDA synapse becomes up to three times more effective at injecting synaptic current when the post-synaptic membrane is locally depolarized by 30-40 mV from the resting potential.
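The title's clusteron abstraction can be summarized in a few lines of Python: each synapse's contribution is gated by the total activity in a small dendritic neighborhood, so clustered inputs drive the unit harder than the same number of diffusely placed inputs. The neighborhood radius, input sizes, and group size below are illustrative assumptions, not the parameters of the compartmental model described in the abstract.

import numpy as np

def clusteron_response(x, radius=2):
    # x: activity of synapses ordered along the dendrite; each synapse's
    # contribution is multiplied by the summed activity in its local window,
    # a crude stand-in for NMDA-style cooperativity.
    y = 0.0
    for i in range(len(x)):
        lo, hi = max(0, i - radius), min(len(x), i + radius + 1)
        y += x[i] * np.sum(x[lo:hi])
    return y

rng = np.random.default_rng(0)
n_syn, n_active, group = 200, 40, 5
diffuse = np.zeros(n_syn)
diffuse[rng.choice(n_syn, n_active, replace=False)] = 1.0
clustered = np.zeros(n_syn)
for s in rng.choice(np.arange(0, n_syn, 2 * group), n_active // group, replace=False):
    clustered[s:s + group] = 1.0
print(clusteron_response(diffuse), clusteron_response(clustered))  # clustered input wins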
Linear Operator for Object Recognition
Visual object recognition involves the identification of images of 3-D objects seen from arbitrary viewpoints. We suggest an approach to object recognition in which a view is represented as a collection of points given by their location in the image. An object is modeled by a set of 2-D views together with the correspondence between the views. We show that any novel view of the object can be expressed as a linear combination of the stored views. Consequently, we build a linear operator that distinguishes between views of a specific object and views of other objects.
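A minimal sketch of the linear-combination idea, under the assumption that corresponding point coordinates are available for each stored view: a novel view is fitted as a linear combination of the stored views by least squares, and the residual can serve as a match score. The stacking convention and the added constant term are our assumptions, not the paper's exact operator.

import numpy as np

def fit_view(stored_views, novel_view):
    # stored_views: (k, n) array, one row of point coordinates per stored view,
    #               with point-to-point correspondence across views.
    # novel_view:   (n,) coordinates of the same n points in the new image.
    A = np.vstack([stored_views, np.ones(stored_views.shape[1])]).T   # n x (k+1)
    coeffs, *_ = np.linalg.lstsq(A, novel_view, rcond=None)
    residual = np.linalg.norm(A @ coeffs - novel_view)   # small residual: likely same object
    return coeffs, residual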
Models Wanted: Must Fit Dimensions of Sleep and Dreaming
Hobson, J. Allan, Mamelak, Adam N., Sutton, Jeffrey P.
During waking and sleep, the brain and mind undergo a tightly linked and precisely specified set of changes in state. At the level of neurons, this process has been modeled by variations of Volterra-Lotka equations for cyclic fluctuations of brainstem cell populations. However, neural network models based upon rapidly developing knowledge of the specific population connectivities and their differential responses to drugs have not yet been developed. Furthermore, only the most preliminary attempts have been made to model across states. Some of our own attempts to link rapid eye movement (REM) sleep neurophysiology and dream cognition using neural network approaches are summarized in this paper.
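For readers unfamiliar with the Volterra-Lotka formulation mentioned above, the sketch below integrates a generic two-population system of that type with a simple Euler step; the parameter values and the mapping of the two variables onto REM-on and REM-off cell populations are illustrative assumptions only.

import numpy as np

def volterra_lotka(x0, y0, a=1.0, b=0.5, c=0.5, d=1.0, dt=0.01, steps=5000):
    # x: e.g. a REM-promoting population that grows and is suppressed through
    #    interaction with y
    # y: e.g. a REM-suppressing population sustained by that interaction
    x, y = x0, y0
    traj = []
    for _ in range(steps):
        dx = a * x - b * x * y
        dy = -c * y + d * x * y
        x, y = x + dt * dx, y + dt * dy
        traj.append((x, y))
    # Produces cyclic fluctuations of the two populations; a finer integrator
    # would conserve the closed orbits more accurately than this Euler step.
    return np.array(traj)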
Constant-Time Loading of Shallow 1-Dimensional Networks
The complexity of learning in shallow 1-dimensional neural networks has been shown elsewhere to be linear in the size of the network. However, when the network has a huge number of units (as cortex has), even linear time might be unacceptable. Furthermore, the algorithm that was given to achieve this time was based on a single serial processor and was biologically implausible. In this work we consider the more natural parallel model of processing and demonstrate an expected-time complexity that is constant (i.e., independent of the size of the network).
Splines, Rational Functions and Neural Networks
Williamson, Robert C., Bartlett, Peter L.
Connections between spline approximation, approximation with rational functions, and feedforward neural networks are studied. The potential improvement in the degree of approximation in going from single to two hidden layer networks is examined. Some results of Birman and Solomjak regarding the degree of approximation achievable when knot positions are chosen on the basis of the probability distribution of examples rather than the function values are extended.
Illumination and View Position in 3D Visual Recognition
It is shown that both changes in viewing position and illumination conditions can be compensated for, prior to recognition, using combinations of images taken from different viewing positions and different illumination conditions. It is also shown that, in agreement with psychophysical findings, the computation requires at least a sign-bit image as input; contours alone are not sufficient.
1 Introduction
The task of visual recognition is natural and effortless for biological systems, yet the problem of recognition has proven to be very difficult to analyze from a computational point of view. The fundamental reason is that novel images of familiar objects are often not sufficiently similar to previously seen images of that object. Assuming a rigid and isolated object in the scene, there are two major sources of this variability: geometric and photometric. The geometric source of variability comes from changes of view position. A 3D object can be viewed from a variety of directions, each resulting in a different 2D projection. The difference is significant, even for modest changes in viewing positions, and can be demonstrated by superimposing those projections (see Figure 1, first row, second image). Much attention has been given to this problem in the visual recognition literature ([9], and references therein), and recent results show that one can compensate for changes in viewing position by generating novel views from a small number of model views of the object [10, 4, 8].
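On the photometric side of this argument, a small Python sketch of the kind of combination involved: for a roughly Lambertian object in a fixed pose, an image under a novel light source is approximated as a linear combination of a few basis images taken under different illuminations. The least-squares fitting and the use of three basis images are assumptions made for illustration, not the paper's derivation.

import numpy as np

def fit_illumination(basis_images, novel_image):
    # basis_images: list of 2-D arrays, same view under different light sources
    #               (e.g. three of them); novel_image: same view, new light source.
    B = np.stack([im.ravel() for im in basis_images], axis=1)     # pixels x k
    coeffs, *_ = np.linalg.lstsq(B, novel_image.ravel(), rcond=None)
    reconstruction = (B @ coeffs).reshape(novel_image.shape)      # compensated image
    return coeffs, reconstruction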