Information Technology
Extraction of Temporal Features in the Electrosensory System of Weakly Electric Fish
Gabbiani, Fabrizio, Metzner, Walter, Wessel, Ralf, Koch, Christof
The weakly electric fish, Eigenmannia, generates a quasi-sinusoidal, dipole-like electric field at individually fixed frequencies (250-600 Hz) by discharging an electric organ located in its tail (see Bullock and Heiligenberg, 1986, for reviews). The fish sense local changes in the electric field by means of two types of tuberous electroreceptors located on the body surface.
Cholinergic Modulation Preserves Spike Timing Under Physiologically Realistic Fluctuating Input
Tang, Akaysha C., Bartels, Andreas M., Sejnowski, Terrence J.
Recently, there has been a vigorous debate concerning the nature of neural coding (Rieke et al. 1996; Stevens and Zador 1995; Shadlen and Newsome 1994). The prevailing view has been that the mean firing rate conveys all the information about the sensory stimulus in a spike train, and that the precise timing of the individual spikes is noise. This belief is based, in part, on a lack of correlation between the precise timing of spikes and the sensory qualities of the stimulus under study, and particularly on a lack of spike-timing repeatability when identical stimulation is delivered. This view has been challenged by a number of recent studies, in which highly repeatable temporal patterns of spikes have been observed both in vivo (Bair and Koch 1996; Abeles et al. 1993) and in vitro (Mainen and Sejnowski 1994). Furthermore, application of information theory to the coding problem in the frog and house fly (Bialek et al. 1991; Bialek and Rieke 1992) suggested that additional information could be extracted from spike timing. In the absence of direct evidence for a timing code in the cerebral cortex, the role of spike timing in neural coding remains controversial.
Why did TD-Gammon Work?
Pollack, Jordan B., Blair, Alan D.
Although TD-Gammon is one of the major successes in machine learning, it has not led to similar impressive breakthroughs in temporal difference learning for other applications or even other games. We were able to replicate some of the success of TD-Gammon, developing a competitive evaluation function on a 4,000-parameter feed-forward neural network, without using back-propagation, reinforcement, or temporal difference learning methods. Instead we apply simple hill-climbing in a relative fitness environment. These results and further analysis suggest that the surprising success of Tesauro's program had more to do with the co-evolutionary structure of the learning task and the dynamics of the backgammon game itself.
1 INTRODUCTION
It took great chutzpah for Gerald Tesauro to start wasting computer cycles on temporal difference learning in the game of Backgammon (Tesauro, 1992). After all, the dream of computers mastering a domain by self-play or "introspection" had been around since the early days of AI, forming part of Samuel's checker player (Samuel, 1959) and used in Donald Michie's MENACE tic-tac-toe learner (Michie, 1961).
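The hill-climbing setup described here is easy to sketch in miniature. The toy below replaces backgammon with a trivial stand-in game whose winner is simply the player closer to a hidden optimum; the mutation scale, acceptance rule, and all names are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden "perfect strategy": a stand-in for backgammon skill
# (illustrative only; the real task has no known optimum).
target = rng.normal(size=20)

def beats(challenger, champion):
    # Relative fitness: the player whose weights lie closer to the
    # hidden optimum wins the toy "game".
    return np.linalg.norm(challenger - target) < np.linalg.norm(champion - target)

champion = np.zeros(20)
for _ in range(3000):
    challenger = champion + rng.normal(scale=0.05, size=20)  # mutate
    if beats(challenger, champion):
        champion = challenger     # simple hill-climbing: keep the winner

print(np.linalg.norm(champion - target))
```

The only learning signal is who won; there is no gradient, no temporal difference error, and no absolute fitness score, yet the champion still climbs toward strong play.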
A Spike Based Learning Neuron in Analog VLSI
Häfliger, Philipp, Mahowald, Misha, Watts, Lloyd
Many popular learning rules are formulated in terms of continuous, analog inputs and outputs. Biological systems, however, use action potentials, which are digital-amplitude events that encode analog information in the inter-event interval. Action-potential representations are now being used to advantage in neuromorphic VLSI systems as well. We report on a simple learning rule, based on the Riccati equation described by Kohonen [1], modified for action-potential neuronal outputs. We demonstrate this learning rule in an analog VLSI chip that uses volatile capacitive storage for synaptic weights. We show that our time-dependent learning rule is sufficient to achieve approximate weight normalization and can detect temporal correlations in spike trains.
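As a software caricature of the two properties demonstrated on the chip, the sketch below substitutes a rate-based, Oja-style normalizing update for the chip's spike-based, Riccati-derived rule (an assumption made purely for illustration): weights driven by a correlated pair of spike trains grow larger than those driven by independent trains of the same rate, while the weight vector stays approximately normalized.

```python
import numpy as np

rng = np.random.default_rng(1)

T, eta = 100_000, 0.02
w = rng.uniform(0.1, 0.5, size=4)     # initial synaptic weights

for _ in range(T):
    common = rng.random() < 0.10      # shared spike source
    own = rng.random(4)
    x = np.empty(4)
    # Inputs 0 and 1 are correlated through the shared source; 2 and 3
    # are independent but have the same overall firing probability (0.19).
    x[0] = common or own[0] < 0.10
    x[1] = common or own[1] < 0.10
    x[2] = own[2] < 0.19
    x[3] = own[3] < 0.19
    y = w @ x                          # instantaneous output rate
    w += eta * y * (x - y * w)         # Oja-style normalizing update

print(np.round(w, 2), np.linalg.norm(w))
```

The correlated pair captures the larger weights even though all four inputs fire at the same mean rate, and the weight norm settles near one without any explicit renormalization step, which is the behavior the chip's rule is reported to achieve with spikes.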
A Silicon Model of Amplitude Modulation Detection in the Auditory Brainstem
Schaik, André van, Fragnière, Eric, Vittoz, Eric A.
Detection of the periodicity of amplitude modulation is a major step in the determination of the pitch of a sound. In this article we will present a silicon model that uses synchronicity of spiking neurons to extract the fundamental frequency of a sound. It is based on the observation that the so-called 'Choppers' in the mammalian Cochlear Nucleus synchronize well for certain rates of amplitude modulation, depending on the cell's intrinsic chopping frequency. Our silicon model uses three different circuits, i.e., an artificial cochlea, an Inner Hair Cell circuit, and a spiking neuron circuit.
MIMIC: Finding Optima by Estimating Probability Densities
De Bonet, Jeremy S., Isbell, Charles Lee, Jr., Viola, Paul A.
In many optimization problems, the structure of solutions reflects complex relationships between the different input parameters. For example, experience may tell us that certain parameters are closely related and should not be explored independently. Similarly, experience may establish that a subset of parameters must take on particular values. Any search of the cost landscape should take advantage of these relationships. We present MIMIC, a framework in which we analyze the global structure of the optimization landscape. A novel and efficient algorithm for the estimation of this structure is derived. We use knowledge of this structure to guide a randomized search through the solution space and, in turn, to refine our estimate of the structure.
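The loop MIMIC describes, estimate a density over the best solutions, sample from it, and re-estimate, can be sketched as follows. For brevity this sketch fits an independent product of bit-wise marginals rather than the paper's mutual-information dependency chain, and optimizes a toy OneMax cost; all constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(pop):
    # Toy cost landscape: number of ones in each bit string (OneMax).
    return pop.sum(axis=1)

n_bits, pop_size, elite_frac = 40, 200, 0.3
p = np.full(n_bits, 0.5)        # estimated density over solutions

for generation in range(50):
    pop = (rng.random((pop_size, n_bits)) < p).astype(int)   # sample
    f = fitness(pop)
    cutoff = np.quantile(f, 1 - elite_frac)
    elite = pop[f >= cutoff]
    # Re-estimate the density from the best samples, with smoothing
    # so no bit probability collapses to 0 or 1 too early.
    p = 0.9 * elite.mean(axis=0) + 0.1 * p

best = (rng.random((pop_size, n_bits)) < p).astype(int)
print(fitness(best).max())      # near the optimum of 40
```

On problems with coupled parameters, the independent-marginal density above is exactly what fails; modeling the pairwise dependencies, as MIMIC's chain distribution does, is what lets the search exploit the relationships between parameters.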
Adaptive On-line Learning in Changing Environments
Murata, Noboru, Müller, Klaus-Robert, Ziehe, Andreas, Amari, Shun-ichi
An adaptive on-line algorithm extending the learning of learning idea is proposed and theoretically motivated. Relying only on gradient flow information, it can be applied to learning continuous functions or distributions, even when no explicit loss function is given and the Hessian is not available. Its efficiency is demonstrated for a non-stationary blind separation task of acoustic signals.
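A minimal sketch of this kind of scheme, assuming (illustratively, not from the paper) that an exponentially averaged gradient flow drives the step size: the step size grows while the averaged flow is large, as after a change in the environment, and shrinks as the flow dies out near convergence.

```python
import numpy as np

rng = np.random.default_rng(0)

eta, alpha, beta, delta = 0.1, 0.02, 1.0, 0.1   # illustrative constants
w, r = 0.0, 0.0

for t in range(2000):
    mean = 0.0 if t < 1000 else 3.0    # the environment changes at t = 1000
    y = mean + 0.3 * rng.normal()
    grad = w - y                       # gradient of the loss 0.5 * (w - y)**2
    r = (1 - delta) * r + delta * grad # exponentially averaged gradient flow
    eta += alpha * eta * (beta * abs(r) - eta)   # step size tracks the flow
    w -= eta * grad

print(round(w, 2), round(eta, 3))
```

No loss value or Hessian is ever used: the magnitude of the averaged gradient flow alone tells the algorithm whether it is still converging or whether the target has moved.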
Learning Appearance Based Models: Mixtures of Second Moment Experts
Bregler, Christoph, Malik, Jitendra
This paper describes a new technique for object recognition based on learning appearance models. The image is decomposed into local regions, each described by a new texture representation called "Generalized Second Moments", derived from the output of multiscale, multiorientation filter banks. Class-characteristic local texture features and their global composition are learned by a hierarchical mixture of experts architecture (Jordan & Jacobs). The technique is applied to a vehicle database consisting of 5 general car categories (Sedan, Van with backdoors, Van without backdoors, old Sedan, and Volkswagen Bug). This is a difficult problem with considerable in-class variation.
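The classical windowed second-moment matrix of image gradients, which the "Generalized Second Moments" extend from gradients to full filter-bank outputs, gives the flavor of the representation. A simplified stand-in, not the paper's descriptor:

```python
import numpy as np

def second_moment_matrix(patch):
    # Image gradients by finite differences; np.gradient returns
    # derivatives along axis 0 (rows, y) then axis 1 (columns, x).
    gy, gx = np.gradient(patch.astype(float))
    # Average the outer product of the gradient over the region:
    # M = [[<gx*gx>, <gx*gy>], [<gx*gy>, <gy*gy>]]
    return np.array([[(gx * gx).mean(), (gx * gy).mean()],
                     [(gx * gy).mean(), (gy * gy).mean()]])

# A patch of vertical stripes: intensity varies only along x,
# so the second-moment energy concentrates in the x direction.
patch = np.tile(np.sin(np.linspace(0, 6 * np.pi, 32)), (32, 1))
M = second_moment_matrix(patch)
print(np.round(M, 3))
```

The eigenstructure of such a matrix summarizes the dominant orientation and anisotropy of the local texture; replacing the two gradient channels with many filter-bank channels is the generalization the paper builds on.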
Removing Noise in On-Line Search using Adaptive Batch Sizes
Stochastic (online) learning can be faster than batch learning. However, at late times, the learning rate must be annealed to remove the noise present in the stochastic weight updates. In this annealing phase, the convergence rate (in mean square) is at best proportional to 1/T where T is the number of input presentations. An alternative is to increase the batch size to remove the noise. In this paper we explore convergence for LMS using 1) small but fixed batch sizes and 2) an adaptive batch size. We show that the best adaptive batch schedule is exponential and has a rate of convergence which is the same as for annealing, i.e., at best proportional to 1/T.
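A sketch of the comparison on a scalar LMS problem: with a fixed learning rate, a constant batch size leaves a noise floor, while an exponentially growing batch schedule drives the error down further for the same total number of input presentations. All constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
w_true, sigma, eta = 2.0, 1.0, 0.3

def lms(batch_sizes):
    # Plain LMS on a scalar estimation problem: each step averages
    # the gradient over one batch, then takes a fixed-step update.
    w = 0.0
    for B in batch_sizes:
        y = w_true + sigma * rng.normal(size=B)
        w += eta * (y.mean() - w)
    return w

def mean_sq_error(batch_sizes, repeats=20):
    return np.mean([(lms(batch_sizes) - w_true) ** 2 for _ in range(repeats)])

T = 5000                          # total input presentations per schedule
fixed = [10] * (T // 10)          # small constant batches
expo, B = [], 1
while sum(expo) + B <= T:         # exponentially growing batches
    expo.append(B)
    B = int(np.ceil(B * 1.5))

err_fixed = mean_sq_error(fixed)
err_expo = mean_sq_error(expo)
print(err_fixed, err_expo)
```

Averaging the gradient over a batch of size B divides the update noise variance by B, so growing B exponentially removes noise at a rate comparable to annealing the learning rate, without ever shrinking the step size.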