Spiking Boltzmann Machines
Hinton, Geoffrey E., Brown, Andrew D.
We first show how to represent sharp posterior probability distributions using real-valued coefficients on broadly-tuned basis functions. Then we show how the precise times of spikes can be used to convey the real-valued coefficients on the basis functions quickly and accurately. Finally we describe a simple simulation in which spiking neurons learn to model an image sequence by fitting a dynamic generative model.

1 Population codes and energy landscapes

A perceived object is represented in the brain by the activities of many neurons, but there is no general consensus on how the activities of individual neurons combine to represent the multiple properties of an object. We start by focussing on the case of a single object that has multiple instantiation parameters such as position, velocity, size and orientation. We assume that each neuron has an ideal stimulus in the space of instantiation parameters and that its activation rate or probability of activation falls off monotonically in all directions as the actual stimulus departs from this ideal.
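As an illustration of this population-code setup, here is a minimal Python sketch (our construction, not the paper's): each neuron has an ideal stimulus in instantiation-parameter space, and its activation falls off monotonically in all directions from that ideal, with Gaussian tuning assumed as the falloff.

```python
# A minimal sketch (not from the paper): each neuron's activation falls off
# monotonically as the actual stimulus departs from the neuron's ideal
# (preferred) stimulus; Gaussian tuning is an assumed falloff shape.
import numpy as np

def population_activity(stimulus, preferred, width=1.0):
    """Activation of each neuron for a stimulus in instantiation-parameter space.

    stimulus:  (d,) actual instantiation parameters (e.g. position, size).
    preferred: (n, d) each neuron's ideal stimulus.
    """
    d2 = np.sum((preferred - stimulus) ** 2, axis=1)   # squared distance to ideal
    return np.exp(-d2 / (2.0 * width ** 2))            # monotone falloff in all directions

# 2-D example: 25 neurons with preferred stimuli on a grid.
grid = np.stack(np.meshgrid(np.linspace(-2, 2, 5), np.linspace(-2, 2, 5)), -1).reshape(-1, 2)
rates = population_activity(np.array([0.3, -0.5]), grid)
print(rates.round(3))
```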
Perceptual Organization Based on Temporal Dynamics
A figure-ground segregation network is proposed based on a novel boundary-pair representation. Nodes in the network are boundary segments obtained through local grouping. Each node is excitatorily coupled with the neighboring nodes that belong to the same region, and inhibitorily coupled with the corresponding paired node. Gestalt grouping rules are incorporated by modulating connections. The status of a node represents its probability of being figural and is updated according to a differential equation.
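The abstract does not give the differential equation, so the sketch below assumes a simple relaxation form: each node's figural probability is driven by a sigmoid of its excitatory same-region input minus inhibition from its paired node, integrated with Euler steps. The weights, sigmoid, and step size are all illustrative assumptions, not the paper's.

```python
# Illustrative sketch only: assumes dp_i/dt = -p_i + sigmoid(
#   sum_j w_ij * p_j - w_pair * p_pair(i) ), integrated with Euler steps.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def update_figural_status(p, W_excite, pair, w_pair=4.0, dt=0.1, steps=200):
    """p[i]: probability that boundary segment i is figural.
    W_excite[i, j]: excitatory coupling between same-region neighbors
                    (Gestalt rules would modulate these weights).
    pair[i]: index of the corresponding paired node (inhibitory)."""
    for _ in range(steps):
        drive = W_excite @ p - w_pair * p[pair]
        p = p + dt * (-p + sigmoid(drive))
    return p

# Two boundary pairs: nodes 0,1 support each other; nodes 2,3 are their pairs.
W = 3.0 * np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], float)
pair = np.array([2, 3, 0, 1])
print(update_figural_status(np.array([0.6, 0.6, 0.4, 0.4]), W, pair).round(2))
```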
An Environment Model for Nonstationary Reinforcement Learning
Choi, Samuel P. M., Yeung, Dit-Yan, Zhang, Nevin Lianwen
Reinforcement learning in nonstationary environments is generally regarded as an important yet difficult problem. This paper partially addresses the problem by formalizing a subclass of nonstationary environments. The environment model, called hidden-mode Markov decision process (HM-MDP), assumes that environmental changes are always confined to a small number of hidden modes.
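A minimal sketch of the HM-MDP structure as described: a small number of hidden modes, each an ordinary MDP over shared state and action spaces, with the environment switching modes stochastically between steps. The class name and sampling interface are our own, not from the paper.

```python
# Sketch of an HM-MDP: each hidden mode is an ordinary MDP; the environment
# occasionally switches modes, unobserved by the agent. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

class HiddenModeMDP:
    def __init__(self, mode_trans, trans, rewards):
        self.mode_trans = mode_trans   # (M, M) hidden-mode transition matrix
        self.trans = trans             # (M, S, A, S) per-mode state transitions
        self.rewards = rewards         # (M, S, A) per-mode expected rewards
        self.mode = 0
        self.state = 0

    def step(self, action):
        r = self.rewards[self.mode, self.state, action]
        self.state = rng.choice(len(self.trans[0]),
                                p=self.trans[self.mode, self.state, action])
        # the hidden mode evolves on its own, unobserved by the agent
        self.mode = rng.choice(len(self.mode_trans), p=self.mode_trans[self.mode])
        return self.state, r

# Toy instance: two modes, two states, two actions.
M, S, A = 2, 2, 2
mode_T = np.array([[0.95, 0.05], [0.05, 0.95]])
T = np.full((M, S, A, S), 0.5)
R = rng.standard_normal((M, S, A))
env = HiddenModeMDP(mode_T, T, R)
print(env.step(action=0))
```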
Bayesian Averaging is Well-Temperated
Often a learning problem has a natural quantitative measure of generalization. If a loss function is defined, the natural measure is the generalization error, i.e., the expected loss on a random sample independent of the training set. Generalizability is a key topic of learning theory and much progress has been reported. Analytic results for a broad class of machines can be found in the literature [8, 12, 9, 10] describing the asymptotic generalization ability of supervised algorithms that are continuously parameterized. Asymptotic bounds on generalization for general machines have been advocated by Vapnik [11]. Generalization results valid for finite training sets can only be obtained for specific learning machines, see e.g.
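As a concrete reading of this definition, the sketch below estimates the generalization error by Monte Carlo: fit a model on a training set, then average the loss over a large fresh sample drawn independently of it. The polynomial-regression setup is our own example, not from the paper.

```python
# Sketch: generalization error as expected loss on data independent of the
# training set, estimated by Monte Carlo for a least-squares polynomial fit.
import numpy as np

rng = np.random.default_rng(1)

def true_fn(x):
    return np.sin(x)

# Train on a small set, then estimate expected squared loss on fresh data.
x_tr = rng.uniform(-3, 3, 20)
y_tr = true_fn(x_tr) + 0.1 * rng.standard_normal(20)
coefs = np.polyfit(x_tr, y_tr, deg=3)

x_te = rng.uniform(-3, 3, 100_000)           # independent test sample
y_te = true_fn(x_te) + 0.1 * rng.standard_normal(x_te.size)
gen_error = np.mean((np.polyval(coefs, x_te) - y_te) ** 2)
print(f"estimated generalization error: {gen_error:.4f}")
```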
Bifurcation Analysis of a Silicon Neuron
Patel, Girish N., Cymbalyuk, Gennady S., Calabrese, Ronald L., DeWeerth, Stephen P.
We have developed a VLSI silicon neuron and a corresponding mathematical model that is a two state-variable system. We describe the circuit implementation and compare the behaviors observed in the silicon neuron and the mathematical model. We also perform bifurcation analysis of the mathematical model by varying the externally applied current and show that the behaviors exhibited by the silicon neuron under corresponding conditions are in good agreement with those predicted by the bifurcation analysis.
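The paper's circuit equations are not reproduced here, so as a stand-in the sketch below uses the FitzHugh-Nagumo model, a standard two state-variable neuron, and sweeps the externally applied current I to expose the kind of rest-to-spiking transition that a bifurcation analysis would locate.

```python
# Stand-in sketch: FitzHugh-Nagumo, a standard two state-variable neuron model,
# swept over the externally applied current I to find the onset of spiking
# (a bifurcation from rest to oscillation). Not the paper's circuit equations.
import numpy as np

def simulate(I, a=0.7, b=0.8, tau=12.5, dt=0.05, steps=40_000):
    v, w = -1.0, -0.5
    vs = np.empty(steps)
    for t in range(steps):
        v += dt * (v - v**3 / 3 - w + I)      # fast voltage-like variable
        w += dt * ((v + a - b * w) / tau)     # slow recovery variable
        vs[t] = v
    return vs

for I in (0.0, 0.3, 0.5, 1.0):
    tail = simulate(I)[-10_000:]              # discard the transient
    print(f"I={I:.1f}  voltage swing={np.ptp(tail):.2f}")  # large swing => spiking
```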
Population Decoding Based on an Unfaithful Model
Wu, Si, Nakahara, Hiroyuki, Murata, Noboru, Amari, Shun-ichi
We study a population decoding paradigm in which the maximum likelihood inference is based on an unfaithful decoding model (UMLI). This is usually the case for neural population decoding because the encoding process of the brain is not exactly known, or because a simplified decoding model is preferred for saving computational cost. We consider an unfaithful decoding model which neglects the pairwise correlation between neuronal activities, and prove that UMLI is asymptotically efficient when the neuronal correlation is uniform or of limited range. The performance of UMLI is compared with that of the maximum likelihood inference based on a faithful model and that of the center-of-mass decoding method. It turns out that UMLI has the advantages of remarkably decreasing the computational complexity while maintaining a high level of decoding accuracy. The effect of correlation on the decoding accuracy is also discussed.
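Below is a toy version of the comparison described above; the tuning curves, noise model, and grid search are our assumptions. Responses are generated with limited-range correlated noise, but the UMLI decoder maximizes a likelihood that treats the neurons as independent; the center-of-mass estimate is computed for reference.

```python
# Sketch of decoding with an unfaithful model that ignores pairwise
# correlations: responses carry limited-range correlated noise, but the
# decoder maximizes an independent-Poisson likelihood over a stimulus grid.
import numpy as np

rng = np.random.default_rng(2)
prefs = np.linspace(-5, 5, 41)                      # preferred stimuli

def tuning(s):
    return 10 * np.exp(-0.5 * (prefs - s) ** 2)     # mean firing rates

s_true = 1.3
rates = tuning(s_true)
# correlated Gaussian noise as a stand-in for correlated spiking variability
cov = 0.5 * np.exp(-np.abs(prefs[:, None] - prefs[None, :]))  # limited-range
r = rates + rng.multivariate_normal(np.zeros(prefs.size), cov)

# UMLI: independent-Poisson log likelihood, maximized over a stimulus grid
grid = np.linspace(-5, 5, 1001)
ll = np.array([np.sum(r * np.log(tuning(s) + 1e-12) - tuning(s)) for s in grid])
s_umli = grid[np.argmax(ll)]
s_com = np.sum(r * prefs) / np.sum(r)               # center-of-mass decode
print(f"true={s_true:.2f}  UMLI={s_umli:.2f}  center-of-mass={s_com:.2f}")
```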
Local Probability Propagation for Factor Analysis
Ever since Pearl's probability propagation algorithm in graphs with cycles was shown to produce excellent results for error-correcting decoding a few years ago, we have been curious about whether local probability propagation could be used successfully for machine learning. One of the simplest adaptive models is the factor analyzer, which is a two-layer network that models bottom-layer sensory inputs as a linear combination of top-layer factors plus independent Gaussian sensor noise. We show that local probability propagation in the factor analyzer network usually takes just a few iterations to perform accurate inference, even in networks with 320 sensors and 80 factors. We derive an expression for the algorithm's fixed point and show that this fixed point matches the exact solution in a variety of networks, even when the fixed point is unstable.
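As a rough illustration (not the paper's message-passing scheme), the sketch below infers posterior factor means in a 320-sensor, 80-factor model. The exact posterior mean solves a linear system; a Jacobi-style iteration that uses only local quantities stands in for local probability propagation and converges in a few iterations here.

```python
# Sketch, not the paper's algorithm: in the model x = Lam z + noise
# (noise ~ N(0, Psi), z ~ N(0, I)), the exact posterior mean solves
# (I + Lam^T Psi^{-1} Lam) z = Lam^T Psi^{-1} x. A Jacobi-style local
# iteration stands in for local probability propagation.
import numpy as np

rng = np.random.default_rng(3)
n_sensors, n_factors = 320, 80
Lam = rng.standard_normal((n_sensors, n_factors)) / np.sqrt(n_sensors)
psi = np.ones(n_sensors)                             # independent sensor noise

x = Lam @ rng.standard_normal(n_factors) + np.sqrt(psi) * rng.standard_normal(n_sensors)

M = np.eye(n_factors) + Lam.T @ (Lam / psi[:, None])
b = Lam.T @ (x / psi)
z_exact = np.linalg.solve(M, b)                      # exact posterior mean

z = np.zeros(n_factors)
D = np.diag(M)
for it in range(10):                                 # a few local iterations
    z = (b - (M - np.diag(D)) @ z) / D
    print(f"iter {it + 1}: error {np.linalg.norm(z - z_exact):.2e}")
```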