From Regularization Operators to Support Vector Kernels
Smola, Alex J., Schölkopf, Bernhard
Support Vector (SV) Machines for pattern recognition, regression estimation and operator inversion exploit the idea of transforming the data into a high-dimensional feature space where they perform a linear algorithm. Instead of evaluating this map explicitly, one uses Hilbert-Schmidt kernels k(x, y) which correspond to dot products of the mapped data in the high-dimensional space, i.e. k(x, y) = (Φ(x) · Φ(y)).
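A minimal numeric illustration of the kernel idea described above: for the homogeneous polynomial kernel of degree 2 on R², the kernel value equals the dot product under an explicit feature map Φ, so the map never has to be computed in practice. The specific map and kernel chosen here are a standard textbook example, not the paper's construction.

```python
import numpy as np

def phi(x):
    # Explicit feature map for the degree-2 homogeneous polynomial kernel on R^2
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

def k(x, y):
    # Kernel evaluation: k(x, y) = (x . y)^2 -- no explicit mapping needed
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])
# Both routes give the same number: (x . y)^2 = Phi(x) . Phi(y)
print(np.isclose(np.dot(phi(x), phi(y)), k(x, y)))  # → True
```

The point of the identity is cost: `k` works in the 2-dimensional input space, while `phi` already needs 3 coordinates, and the gap grows quickly with degree and input dimension.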
An Analog VLSI Neural Network for Phase-based Machine Vision
Shi, Bertram Emil, Hui, Kwok Fai
Gabor filters are used as preprocessing stages for different tasks in machine vision and image processing. Their use has been partially motivated by findings that two-dimensional Gabor filters can be used to model receptive fields of orientation-selective neurons in the visual cortex (Daugman, 1980) and three-dimensional spatiotemporal Gabor filters can be used to model biological image motion analysis (Adelson, 1985). A Gabor filter has a complex-valued impulse response which is a complex exponential modulated by a Gaussian function.
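The impulse response described in the last sentence can be sketched directly: a complex exponential carrier (the spatial frequency) multiplied pointwise by a Gaussian envelope. The size, width, and frequency values below are illustrative defaults, not parameters from the paper.

```python
import numpy as np

def gabor_2d(size=9, sigma=2.0, omega=(0.5, 0.0)):
    # Complex-valued 2-D Gabor impulse response: a complex exponential
    # with spatial frequency `omega`, modulated by a Gaussian of width `sigma`.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    gaussian = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    carrier = np.exp(1j * (omega[0] * xx + omega[1] * yy))
    return gaussian * carrier

g = gabor_2d()
print(g.shape, np.iscomplexobj(g))  # → (9, 9) True
```

Filtering an image with the real and imaginary parts of `g` yields the quadrature pair whose magnitude and phase underlie phase-based vision algorithms.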
A Generic Approach for Identification of Event Related Brain Potentials via a Competitive Neural Network Structure
Lange, Daniel H., Siegelmann, Hava T., Pratt, Hillel, Inbar, Gideon F.
We present a novel generic approach to the problem of Event Related Potential identification and classification, based on a competitive Neural Net architecture. The network weights converge to the embedded signal patterns, resulting in the formation of a matched filter bank. The network performance is analyzed via a simulation study, exploring identification robustness under low SNR conditions and comparing it to the expected performance from an information theoretic perspective. The classifier is applied to real event-related potential data recorded during a classic oddball-type paradigm; for the first time, within-session variable signal patterns are automatically identified, dismissing the strong and limiting requirement of a priori stimulus-related selective grouping of the recorded data.
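The matched-filter-bank behavior that the converged network realizes can be sketched in a few lines: each learned template acts as a matched filter, and a noisy single-trial epoch is assigned to the template with the highest correlation. The sinusoidal templates and noise level below are hypothetical stand-ins, not the paper's ERP data or its competitive learning rule.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 64  # samples per epoch

# Hypothetical template bank (orthogonal sinusoids standing in for ERP patterns)
templates = np.stack(
    [np.sin(2 * np.pi * f * np.arange(T) / T) for f in (1, 3, 5)]
)

true_id = 1
epoch = templates[true_id] + 0.5 * rng.normal(size=T)  # low-SNR single trial

scores = templates @ epoch          # correlate the epoch with each matched filter
print(int(np.argmax(scores)))       # identified class index
```

Because the templates carry far more energy than the per-filter noise projection, the correct filter wins even at low SNR, which is the identification-robustness property the simulation study examines.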
How to Dynamically Merge Markov Decision Processes
Singh, Satinder P., Cohn, David
We are frequently called upon to perform multiple tasks that compete for our attention and resources. Often we know the optimal solution to each task in isolation; in this paper, we describe how this knowledge can be exploited to efficiently find good solutions for doing the tasks in parallel. We formulate this problem as that of dynamically merging multiple Markov decision processes (MDPs) into a composite MDP, and present a new theoretically-sound dynamic programming algorithm for finding an optimal policy for the composite MDP. We analyze various aspects of our algorithm and illustrate its use on a simple merging problem.

Every day, we are faced with the problem of doing multiple tasks in parallel, each of which competes for our attention and resources. If we are running a job shop, we must decide which machines to allocate to which jobs, and in what order, so that no jobs miss their deadlines. If we are a mail delivery robot, we must find the intended recipients of the mail while simultaneously avoiding fixed obstacles (such as walls) and mobile obstacles (such as people), and still manage to keep ourselves sufficiently charged up. Frequently we know how to perform each task in isolation; this paper considers how we can take the information we have about the individual tasks and combine it to efficiently find an optimal solution for doing the entire set of tasks in parallel. More importantly, we describe a theoretically-sound algorithm for doing this merging dynamically; new tasks (such as a new job arrival at a job shop) can be assimilated online into the solution being found for the ongoing set of simultaneous tasks.
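A toy sketch of the composite-MDP idea (not the paper's dynamic merging algorithm): two independent component MDPs are merged into a composite MDP whose state is the pair of component states, the agent chooses which component task to act on at each step, and the composite rewards add. Value iteration then solves the merged problem. All transition and reward numbers are made up for illustration.

```python
import numpy as np
from itertools import product

# Two hypothetical 2-state component MDPs, each with a single action:
# P[s, s'] is the transition matrix, R[s'] the reward for landing in s'.
P1 = np.array([[0.9, 0.1], [0.0, 1.0]])
R1 = np.array([0.0, 1.0])
P2 = np.array([[0.5, 0.5], [0.2, 0.8]])
R2 = np.array([0.0, 2.0])

gamma = 0.9
states = list(product(range(2), range(2)))   # composite state space (cross product)
V = {s: 0.0 for s in states}

for _ in range(200):                          # value iteration on the composite MDP
    for (s1, s2) in states:
        # Composite action = "act on task 1" or "act on task 2";
        # the idle task's state stays put, and rewards are summed.
        q1 = sum(P1[s1, t1] * (R1[t1] + R2[s2] + gamma * V[(t1, s2)])
                 for t1 in range(2))
        q2 = sum(P2[s2, t2] * (R1[s1] + R2[t2] + gamma * V[(s1, t2)])
                 for t2 in range(2))
        V[(s1, s2)] = max(q1, q2)

print(V[(1, 1)] > V[(0, 0)])  # both tasks in their rewarding state is worth more
```

This brute-force sweep over the full cross-product state space is exactly the cost the paper's dynamic merging is designed to avoid; the sketch only shows what "composite MDP" means.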
The Asymptotic Convergence-Rate of Q-learning
Q-learning is a popular reinforcement learning (RL) algorithm whose convergence is well demonstrated in the literature (Jaakkola et al., 1994; Tsitsiklis, 1994; Littman and Szepesvari, 1996; Szepesvari and Littman, 1996). Our aim in this paper is to provide an upper bound for the convergence rate of (lookup-table based) Q-learning algorithms. Although this upper bound is not strict, computer experiments (to be presented elsewhere) and the form of the lemma underlying the proof indicate that the obtained upper bound can be made strict by a slightly more complicated definition for R. Our results extend to learning on aggregated states (see Singh et al., 1995) and other related algorithms which admit a certain form of asynchronous stochastic approximation (see Szepesvari and Littman, 1996). Present address: Associative Computing, Inc., Budapest, Konkoly Thege M. u. 29-33, HUNGARY-1121
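For reference, the lookup-table Q-learning update whose convergence rate the paper bounds can be sketched as follows. The toy two-state environment and the 1/n learning-rate schedule are illustrative assumptions; the bound itself is not computed here.

```python
import random
import numpy as np

n_states, n_actions, gamma = 2, 2, 0.9
Q = np.zeros((n_states, n_actions))          # lookup table of Q-values
counts = np.zeros((n_states, n_actions))

def step(s, a):
    # Hypothetical environment: action 1 moves to state 1, which pays reward 1
    s2 = 1 if a == 1 else 0
    r = 1.0 if s2 == 1 else 0.0
    return r, s2

random.seed(0)
s = 0
for _ in range(5000):
    a = random.randrange(n_actions)          # exploratory behavior policy
    r, s2 = step(s, a)
    counts[s, a] += 1
    alpha = 1.0 / counts[s, a]               # decaying per-pair learning rate
    # Asynchronous stochastic-approximation update toward r + gamma * max_a' Q(s', a')
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2

print(np.round(Q, 2))
```

How fast `Q` approaches the fixed point Q* under schedules like this `alpha` is exactly the quantity the paper's upper bound addresses.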
A Hippocampal Model of Recognition Memory
O'Reilly, Randall C., Norman, Kenneth A., McClelland, James L.
A rich body of data exists showing that recollection of specific information makes an important contribution to recognition memory, which is distinct from the contribution of familiarity, and is not adequately captured by existing unitary memory models. Furthermore, neuropsychological evidence indicates that recollection is subserved by the hippocampus. We present a model, based largely on known features of hippocampal anatomy and physiology, that accounts for the following key characteristics of recollection: 1) false recollection is rare (i.e., participants rarely claim to recollect having studied nonstudied items), and 2) increasing interference leads to less recollection but apparently does not compromise the quality of recollection (i.e., the extent to which recollected information veridically reflects events that occurred at study).
Radial Basis Functions: A Bayesian Treatment
Barber, David, Schottky, Bernhard
Bayesian methods have been successfully applied to regression and classification problems in multi-layer perceptrons. We present a novel application of Bayesian techniques to Radial Basis Function networks by developing a Gaussian approximation to the posterior distribution which, for fixed basis function widths, is analytic in the parameters. The setting of regularization constants by crossvalidation is wasteful as only a single optimal parameter estimate is retained. We treat this issue by assigning prior distributions to these constants, which are then adapted in light of the data under a simple re-estimation formula.

1 Introduction Radial Basis Function networks are popular regression and classification tools [10]. For fixed basis function centers, RBFs are linear in their parameters and can therefore be trained with simple one-shot linear algebra techniques [10]. The use of unsupervised techniques to fix the basis function centers is, however, not generally optimal since setting the basis function centers using density estimation on the input data alone takes no account of the target values associated with that data. Ideally, therefore, we should include the target values in the training procedure [7, 3, 9]. Unfortunately, allowing centers to adapt to the training targets leads to the RBF being a nonlinear function of its parameters, and training becomes more problematic. Most methods that perform supervised training of RBF parameters minimize the
*Present address: SNN, University of Nijmegen, Geert Grooteplein 21, Nijmegen, The Netherlands.
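The "one-shot linear algebra" training mentioned in the introduction can be sketched concretely: with centers and widths fixed, the Gaussian basis functions define a design matrix and the output weights come from a single regularized least-squares solve. The data, centers, width, and regularization constant below are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))              # training inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)   # noisy targets

centers = np.linspace(-3, 3, 10)[:, None]         # fixed basis function centers
width = 1.0
# Design matrix: Phi[n, j] = exp(-(x_n - c_j)^2 / (2 * width^2)), shape (40, 10)
Phi = np.exp(-((X - centers.T) ** 2) / (2 * width ** 2))

lam = 1e-3                                        # fixed regularization constant
# One-shot solve of the regularized normal equations for the output weights
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(10), Phi.T @ y)

pred = Phi @ w
print(np.mean((pred - y) ** 2))                   # training error (small)
```

In the Bayesian treatment the paper develops, `lam` would not stay fixed but would be re-estimated from the data via its prior; the linear-in-parameters structure exploited by this solve is what makes the posterior analytic for fixed widths.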
Multi-modular Associative Memory
Levy, Nir, Horn, David, Ruppin, Eytan
Motivated by the findings of modular structure in the association cortex, we study a multi-modular model of associative memory that can successfully store memory patterns with different levels of activity. We show that the segregation of synaptic conductances into intra-modular linear and inter-modular nonlinear ones considerably enhances the network's memory retrieval performance. Compared with the conventional, single-module associative memory network, the multi-modular network has two main advantages: It is less susceptible to damage to columnar input, and its response is consistent with the cognitive data pertaining to category specific impairment.

1 Introduction Cortical modules were observed in the somatosensory and visual cortices a few decades ago. These modules differ in their structure and functioning but are likely to be an elementary unit of processing in the mammalian cortex. Within each module the neurons are interconnected.
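For concreteness, the conventional single-module baseline that the multi-modular network is compared against can be sketched as a Hopfield-style associative memory: Hebbian storage of binary patterns and recall by thresholded updates. The sizes, number of patterns, and corruption level are illustrative; the paper's multi-modular conductance scheme is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))   # stored memory patterns

W = (patterns.T @ patterns) / N               # Hebbian outer-product weights
np.fill_diagonal(W, 0.0)                      # no self-connections

probe = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
probe[flip] *= -1                             # corrupt 10% of the bits

state = probe
for _ in range(5):                            # synchronous recall iterations
    state = np.sign(W @ state)
    state[state == 0] = 1

# Fraction of bits matching the stored pattern after recall (near 1 at low load)
print(np.mean(state == patterns[0]))
```

In the paper's model, this single uniform weight matrix is replaced by modular blocks with linear intra-modular and nonlinear inter-modular conductances, which is what improves retrieval of patterns with different activity levels.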