Coupled Dynamics of Fast Neurons and Slow Interactions
Coolen, A.C.C., Penney, R. W., Sherrington, D.
A simple model of coupled dynamics of fast neurons and slow interactions, modelling self-organization in recurrent neural networks, leads naturally to an effective statistical mechanics characterized by a partition function which is an average over a replicated system. This is reminiscent of the replica trick used to study spin-glasses, but with the difference that the number of replicas has a physical meaning as the ratio of two temperatures and can be varied throughout the whole range of real values. The model has interesting phase consequences as a function of varying this ratio and external stimuli, and can be extended to a range of other models. As the basic archetypal model we consider a system of Ising spin neurons $\sigma_i \in \{-1, 1\}$, $i \in \{1, \ldots, N\}$, interacting via continuous-valued symmetric interactions, $J_{ij}$, which themselves evolve in response to the states of the neurons. The neuron dynamics is governed by the Hamiltonian $H(\{\sigma\}) = -\sum_{i<j} J_{ij}\sigma_i\sigma_j$ (2), and the subscript $\{J_{ij}\}$ indicates that the $\{J_{ij}\}$ are to be considered as quenched variables.
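A minimal sketch of how the replica number acquires this physical meaning, under assumptions not spelled out in the excerpt (the spins equilibrate at inverse temperature $\beta$ for fixed couplings, the couplings then relax at a second inverse temperature $\tilde\beta$ in the resulting free energy, and any additional constraint term on the $J_{ij}$ is omitted):

$$
Z_{\{J_{ij}\}} = \sum_{\{\sigma\}} e^{\beta \sum_{i<j} J_{ij}\sigma_i\sigma_j},
\qquad
P(\{J_{ij}\}) \propto \exp\!\big(\tilde\beta\,\beta^{-1}\ln Z_{\{J_{ij}\}}\big) = \big[Z_{\{J_{ij}\}}\big]^{\,n},
\qquad n = \tilde\beta/\beta ,
$$

so averages over the slowly evolving couplings involve $\overline{Z^{\,n}}$, a replicated partition function in which $n$ is the ratio of the two temperatures and may take any real value.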
Classifying Hand Gestures with a View-Based Distributed Representation
Darrell, Trevor J., Pentland, Alex P.
We present a method for learning, tracking, and recognizing human hand gestures recorded by a conventional CCD camera without any special gloves or other sensors. A view-based representation is used to model aspects of the hand relevant to the trained gestures, and is found using an unsupervised clustering technique. We use normalized correlation networks, with dynamic time warping in the temporal domain, as a distance function for unsupervised clustering. Views are computed separably for space and time dimensions; the distributed response of the combination of these units characterizes the input data with a low dimensional representation. A supervised classification stage uses labeled outputs of the spatiotemporal units as training data. Our system can correctly classify gestures in real time with a low-cost image processing accelerator.
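As a hedged illustration of the two ingredients named in the abstract, normalized correlation as a frame-similarity score and dynamic time warping over the temporal dimension, here is a minimal Python sketch; the patch sizes, toy sequences, and the plain O(nm) DTW are illustrative stand-ins, not the authors' implementation.

```python
# Hedged sketch (not the authors' code): normalized correlation between image
# patches, combined with a basic dynamic time warping pass over a gesture
# sequence.  Shapes and data below are illustrative only.
import numpy as np

def normalized_correlation(a, b):
    """Zero-mean, unit-variance correlation between two image patches."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def dtw_distance(seq_a, seq_b):
    """Align two frame sequences; the cost of a match is 1 - correlation."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 1.0 - normalized_correlation(seq_a[i - 1], seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Example: compare a test gesture against a stored view sequence.
rng = np.random.default_rng(0)
template = [rng.random((16, 16)) for _ in range(5)]
test = [f + 0.05 * rng.random((16, 16)) for f in template]
print(dtw_distance(template, test))
```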
Unsupervised Learning of Mixtures of Multiple Causes in Binary Data
This paper presents a formulation for unsupervised learning of clusters reflecting multiple causal structure in binary data. Unlike the standard mixture model, a multiple cause model accounts for observed data by combining assertions from many hidden causes, each of which can pertain to varying degree to any subset of the observable dimensions. A crucial issue is the mixing-function for combining beliefs from different cluster-centers in order to generate data reconstructions whose errors are minimized both during recognition and learning. We demonstrate a weakness inherent to the popular weighted sum followed by sigmoid squashing, and offer an alternative form of the nonlinearity. Results are presented demonstrating the algorithm's ability to discover coherent multiple causal representations in noisy test data and in images of printed characters. 1 Introduction The objective of unsupervised learning is to identify patterns or features reflecting underlying regularities in data. Single-cause techniques, including the k-means algorithm and the standard mixture-model (Duda and Hart, 1973), represent clusters of data points sharing similar patterns of 1s and 0s under the assumption that each data point belongs to, or was generated by, one and only one cluster-center; output activity is constrained to sum to 1. In contrast, a multiple-cause model permits more than one cluster-center to become fully active in accounting for an observed data vector.
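To make the mixing-function issue concrete, the sketch below contrasts the weighted-sum-plus-sigmoid combination criticized in the abstract with a noisy-OR style combination, shown only as one representative alternative; the paper derives its own form of the nonlinearity, which may differ from this.

```python
# Hedged sketch: two ways of combining per-cause predictions of a binary pixel.
# The noisy-OR form is a representative alternative, not necessarily the
# nonlinearity proposed in the paper.
import numpy as np

def sigmoid_mixing(activities, weights):
    """Weighted sum of cause activities squashed by a sigmoid, per pixel."""
    return 1.0 / (1.0 + np.exp(-(activities @ weights)))

def noisy_or_mixing(activities, probs):
    """Noisy-OR: a pixel stays off only if every active cause fails to turn it on."""
    # probs[k, d]: probability that cause k alone turns on pixel d.
    return 1.0 - np.prod(1.0 - activities[:, None] * probs, axis=0)

activities = np.array([1.0, 1.0])                 # two fully active causes
probs = np.array([[0.9, 0.5],                     # cause 0: asserts pixel 0, agnostic on pixel 1
                  [0.9, 0.5]])                    # cause 1: likewise
weights = np.log(probs / (1.0 - probs))           # log-odds form of the same beliefs
print(sigmoid_mixing(activities, weights))        # ~[0.99, 0.5]
print(noisy_or_mixing(activities, probs))         # ~[0.99, 0.75]
```

The two forms disagree on how agnostic causes combine, which is the kind of difference the choice of mixing-function controls.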
A Hodgkin-Huxley Type Neuron Model That Learns Slow Non-Spike Oscillation
Doya, Kenji, Selverston, Allen I., Rowat, Peter F.
A gradient descent algorithm for parameter estimation which is similar to those used for continuous-time recurrent neural networks was derived for Hodgkin-Huxley type neuron models. Using membrane potential trajectories as targets, the parameters (maximal conductances, thresholds and slopes of activation curves, time constants) were successfully estimated. The algorithm was applied to modeling slow non-spike oscillation of an identified neuron in the lobster stomatogastric ganglion. A model with three ionic currents was trained with experimental data. It revealed a novel role of A-current for slow oscillation below -50 mV. 1 INTRODUCTION Conductance-based neuron models, first formulated by Hodgkin and Huxley [10], are commonly used for describing biophysical mechanisms underlying neuronal behavior.
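A hedged, much-simplified sketch of trajectory-based parameter fitting of a conductance model: a leak plus one instantaneous inward current stands in for the three-current model, and finite-difference gradients stand in for the exact gradient formulas derived in the paper; all constants and the learning rate are invented.

```python
# Hedged sketch: fit conductance-model parameters to a target membrane
# potential trajectory by gradient descent on the squared error.
import numpy as np

DT, T_MAX = 0.1, 100.0                          # ms
t = np.arange(0.0, T_MAX, DT)

def simulate(params, v0=-60.0, c_m=1.0):
    """Euler integration of a leak current plus one instantaneous inward current."""
    g_leak, e_leak, g_in, v_half, slope, e_in = params
    v = np.empty_like(t)
    v[0] = v0
    for k in range(1, len(t)):
        m_inf = 1.0 / (1.0 + np.exp(-(v[k - 1] - v_half) / slope))
        i_ion = g_leak * (v[k - 1] - e_leak) + g_in * m_inf * (v[k - 1] - e_in)
        v[k] = v[k - 1] - DT * i_ion / c_m
    return v

def loss(params, target):
    return np.mean((simulate(params) - target) ** 2)

# Build a target trajectory from "true" parameters, then reduce the trajectory
# error from a deliberately perturbed starting guess.
true_params = np.array([0.1, -70.0, 0.3, -45.0, 5.0, 0.0])
target = simulate(true_params)
params = true_params * np.array([1.5, 1.0, 0.5, 1.0, 1.2, 1.0])
lr, eps = 1e-4, 1e-4
for step in range(100):
    grad = np.array([(loss(params + eps * np.eye(6)[i], target)
                      - loss(params - eps * np.eye(6)[i], target)) / (2 * eps)
                     for i in range(6)])
    params -= lr * grad
print("fitted parameters:", np.round(params, 3))
print("trajectory MSE   :", round(loss(params, target), 5))
```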
Illumination-Invariant Face Recognition with a Contrast Sensitive Silicon Retina
Buhmann, Joachim M., Lades, Martin, Eeckman, Frank
We report face recognition results under drastically changing lighting conditions for a computer vision system which concurrently uses a contrast sensitive silicon retina and a conventional, gain-controlled CCD camera. For both input devices the face recognition system employs an elastic matching algorithm with wavelet based features to classify unknown faces. To assess the effect of analog on-chip preprocessing by the silicon retina the CCD images have been "digitally preprocessed" with a bandpass filter to adjust the power spectrum. The silicon retina with its ability to adjust sensitivity increases the recognition rate by up to 50 percent. These comparative experiments demonstrate that preprocessing with an analog VLSI silicon retina generates image data enriched with object-constant features.
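A hedged sketch of the kind of digital band-pass preprocessing described for the CCD images, implemented here as a difference-of-Gaussians filter; the sigma values (cutoff frequencies) are illustrative, not those used in the paper.

```python
# Hedged sketch: band-pass an image by subtracting a heavily blurred copy from
# a lightly blurred one, suppressing both fine noise and slow illumination
# gradients.  Sigmas and the toy image are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def bandpass(image, sigma_low=1.0, sigma_high=4.0):
    """Difference-of-Gaussians band-pass filter."""
    return gaussian_filter(image, sigma_low) - gaussian_filter(image, sigma_high)

rng = np.random.default_rng(1)
face = rng.random((128, 128))                      # stand-in for a CCD frame
gradient = np.linspace(0.0, 1.0, 128)[None, :]     # simulated lighting gradient
print(bandpass(face + gradient).std())
```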
Odor Processing in the Bee: A Preliminary Study of the Role of Central Input to the Antennal Lobe
Linster, Christiane, Marsan, David, Masson, Claudine, Kerszberg, Michel
Based on precise anatomical data of the bee's olfactory system, we propose an investigation of the possible mechanisms of modulation and control between the two levels of olfactory information processing: the antennal lobe glomeruli and the mushroom bodies. We use simplified neurons, but realistic architecture. As a first conclusion, we postulate that the feature extraction performed by the antennal lobe (glomeruli and interneurons) necessitates central input from the mushroom bodies for fine tuning.
Structural and Behavioral Evolution of Recurrent Networks
Saunders, Gregory M., Angeline, Peter J., Pollack, Jordan B.
This paper introduces GNARL, an evolutionary program which induces recurrent neural networks that are structurally unconstrained. In contrast to constructive and destructive algorithms, GNARL employs a population of networks and uses a fitness function's unsupervised feedback to guide search through network space. Annealing is used in generating both Gaussian weight changes and structural modifications. Applying GNARL to a complex search and collection task demonstrates that the system is capable of inducing networks with complex internal dynamics.
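A hedged sketch of the annealed mutation idea described above: a temperature derived from fitness scales both Gaussian weight perturbations and the probability of a structural change. The network encoding and the schedule are illustrative stand-ins, not GNARL's actual representation or operators.

```python
# Hedged sketch: annealing-style mutation of a recurrent network encoded as a
# dict of weighted links.  Names and constants are illustrative only.
import random

def temperature(fitness, best_fitness):
    """Poorer individuals get a higher temperature, hence larger mutations."""
    return 1.0 - fitness / (best_fitness + 1e-8)

def mutate(net, temp, weight_scale=1.0):
    """Gaussian weight jitter plus an occasional structural add/remove of a link."""
    child = dict(net)
    for link in child:
        child[link] += random.gauss(0.0, weight_scale * temp)
    if random.random() < temp:                    # structural mutation
        if random.random() < 0.5 and len(child) > 1:
            child.pop(random.choice(list(child)))
        else:
            child[(random.randrange(10), random.randrange(10))] = 0.0
    return child

parent = {(0, 1): 0.5, (1, 2): -0.3, (2, 0): 0.8}   # recurrent links (pre -> post)
print(mutate(parent, temperature(fitness=2.0, best_fitness=10.0)))
```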
Estimating analogical similarity by dot-products of Holographic Reduced Representations
Gentner and Markman (1992) suggested that the ability to deal with analogy will be a "Watershed or Waterloo" for connectionist models. They identified "structural alignment" as the central aspect of analogy making. They noted the apparent ease with which people can perform structural alignment in a wide variety of tasks and were pessimistic about the prospects for the development of a distributed connectionist model that could be useful in performing structural alignment. In this paper I describe how Holographic Reduced Representations (HRRs) (Plate, 1991; Plate, 1994), a fixed-width distributed representation for nested structures, can be used to obtain fast estimates of analogical similarity.
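A hedged sketch of the HRR machinery the estimate rests on: role-filler bindings by circular convolution, propositions by superposition, and analogical similarity by a single dot product. The toy episodes, vector dimension, and filler features below are illustrative, not the paper's test materials.

```python
# Hedged sketch of HRR encoding and dot-product similarity.  Fillers share
# "type" features (dog, person) so that the structurally analogous episode
# scores higher than the cross-mapped one.
import numpy as np

N = 512
rng = np.random.default_rng(0)
vec = lambda: rng.normal(0.0, 1.0 / np.sqrt(N), N)       # random vector, norm ~ 1
norm = lambda v: v / np.linalg.norm(v)

def cconv(a, b):
    """Circular convolution (the HRR binding operation), computed via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

bite, agent, patient = vec(), vec(), vec()
dog_type, person_type = vec(), vec()
spot, fido = norm(vec() + dog_type), norm(vec() + dog_type)        # two dogs
jane, john = norm(vec() + person_type), norm(vec() + person_type)  # two people

p1 = bite + cconv(agent, spot) + cconv(patient, jane)   # "Spot bit Jane"
p2 = bite + cconv(agent, fido) + cconv(patient, john)   # analogous: dog bites person
p3 = bite + cconv(agent, john) + cconv(patient, fido)   # cross-mapped: roles swapped

print("analogous   :", round(float(norm(p1) @ norm(p2)), 3))
print("cross-mapped:", round(float(norm(p1) @ norm(p3)), 3))
```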
The Role of MT Neuron Receptive Field Surrounds in Computing Object Shape from Velocity Fields
Buracas, G. T., Albright, T. D.
However, the relation between our ability to perceive structure in stimuli simulating 3-D objects and neuronal properties has not been addressed to date. Here we discuss the possibility that area MT RF surrounds might be involved in shape-from-motion perception. We introduce a simplifying model of MT neurons and analyse the implications for SFM problem solving. 2 REDEFINING THE SFM PROBLEM 2.1 RELATIVE MOTION AS A CUE FOR RELATIVE DEPTH Since Helmholtz, motion parallax has been known to be a powerful cue providing information about both the structure of the surrounding environment and the direction of self-motion. On the other hand, moving objects also induce velocity fields allowing judgement about their shapes. We can capture both cases by assuming that an observer is tracking a point on a surface of interest.
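A hedged toy sketch of the kind of receptive-field-surround computation being considered: a unit that reports the difference between the speed at its center and the mean speed in its surround signals relative motion, and hence relative depth, at depth discontinuities. The operator and all parameters are invented for illustration.

```python
# Hedged sketch: center-surround velocity opponency as a relative-motion signal.
import numpy as np

def surround_opponent_response(speed_field, i, j, radius=6):
    """Speed at the receptive-field center minus the mean speed in the surround."""
    center = speed_field[i, j]
    patch = speed_field[max(i - radius, 0):i + radius + 1,
                        max(j - radius, 0):j + radius + 1]
    surround_mean = (patch.sum() - center) / (patch.size - 1)
    return center - surround_mean

# Motion parallax toy case: a nearer patch translates faster across the image
# than the far background, so only units centered on the patch respond.
speeds = np.ones((32, 32))            # far background: uniform image speed
speeds[14:19, 14:19] = 1.5            # nearer surface patch: faster image speed
print(surround_opponent_response(speeds, 16, 16))   # centered on the patch: > 0
print(surround_opponent_response(speeds, 4, 4))     # on the background: ~ 0
```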
Event-Driven Simulation of Networks of Spiking Neurons
A fast event-driven software simulator has been developed for simulating large networks of spiking neurons and synapses. The primitive network elements are designed to exhibit biologically realistic behaviors, such as spiking, refractoriness, adaptation, axonal delays, summation of post-synaptic current pulses, and tonic current inputs. The efficient event-driven representation allows large networks to be simulated in a fraction of the time that would be required for a full compartmental-model simulation. Corresponding analog CMOS VLSI circuit primitives have been designed and characterized, so that large-scale circuits may be simulated prior to fabrication. 1 Introduction Artificial neural networks typically use an abstraction of real neuron behaviour, in which the continuously varying mean firing rate of the neuron is presumed to carry the information about the neuron's time-varying state of excitation [1]. This useful simplification allows the neuron's state to be represented as a time-varying continuous-amplitude quantity.
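A hedged sketch of the event-driven idea: neuron state is brought up to date only when an event (an arriving spike or input pulse) touches it, and pending spikes sit in a time-ordered priority queue. The leaky integrate-and-fire neuron and the two-neuron network are illustrative, not the simulator's actual primitives.

```python
# Hedged sketch: event-driven simulation with a heap-based event queue.
import heapq, math

class LIFNeuron:
    def __init__(self, tau=20.0, threshold=1.0):
        self.tau, self.threshold = tau, threshold
        self.v, self.last_update = 0.0, 0.0

    def receive(self, t, weight):
        """Decay the membrane from the last update time, then add the input."""
        self.v *= math.exp(-(t - self.last_update) / self.tau)
        self.last_update = t
        self.v += weight
        if self.v >= self.threshold:
            self.v = 0.0                 # reset after a spike
            return True
        return False

# Network: external drive onto neuron 0, which excites neuron 1 with a 2 ms delay.
neurons = [LIFNeuron(), LIFNeuron()]
synapses = {0: [(1, 0.6, 2.0)]}                      # pre -> [(post, weight, delay)]
events = [(t, 0, 0.4) for t in (1.0, 2.0, 3.0, 4.0)]  # (time, target, weight)
heapq.heapify(events)

while events:
    t, target, w = heapq.heappop(events)
    if neurons[target].receive(t, w):
        print(f"neuron {target} spiked at t = {t:.1f} ms")
        for post, weight, delay in synapses.get(target, []):
            heapq.heappush(events, (t + delay, post, weight))
```

Because only event times are processed, the cost scales with the number of spikes rather than with the number of integration time steps, which is the source of the speed-up described above.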