Technology
Recurrent Cortical Amplification Produces Complex Cell Responses
Chance, Frances S., Nelson, Sacha B., Abbott, L. F.
Cortical amplification has been proposed as a mechanism for enhancing the selectivity of neurons in the primary visual cortex. Less appreciated is the fact that the same form of amplification can also be used to de-tune or broaden selectivity. Using a network model with recurrent cortical circuitry, we propose that the spatial phase invariance of complex cell responses arises through recurrent amplification of feedforward input.
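As a hedged illustration of the core idea (not the authors' actual model), the sketch below shows how recurrent connectivity can amplify feedforward input: the steady-state response of a linear recurrent network solves r = (I - W)^{-1} h, so eigenmodes of W with eigenvalues near 1 are strongly amplified. The network size and weight scale are assumptions chosen for illustration.

```python
import numpy as np

# Minimal sketch (not the paper's model): a linear recurrent network whose
# steady-state response r solves r = W r + h, i.e. r = (I - W)^{-1} h.
# Modes of W with eigenvalues close to 1 are strongly amplified.

rng = np.random.default_rng(0)
n = 50                                  # number of cortical units (assumed)
h = rng.standard_normal(n)              # feedforward input (assumed)

# Symmetric recurrent weights rescaled so the largest eigenvalue is just
# below 1: strong amplification without instability.
W = rng.standard_normal((n, n))
W = 0.5 * (W + W.T)
W *= 0.95 / np.max(np.abs(np.linalg.eigvalsh(W)))

r = np.linalg.solve(np.eye(n) - W, h)   # steady-state network response

gain = np.linalg.norm(r) / np.linalg.norm(h)
print(f"recurrent amplification gain: {gain:.2f}")
```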
Attentional Modulation of Human Pattern Discrimination Psychophysics Reproduced by a Quantitative Model
Itti, Laurent, Braun, Jochen, Lee, Dale K., Koch, Christof
We previously proposed a quantitative model of early visual processing in primates, based on non-linearly interacting visual filters and statistically efficient decision making. We now use this model to interpret the observed modulation of a range of human psychophysical thresholds with and without focal visual attention. Our model, calibrated by an automatic fitting procedure, simultaneously reproduces thresholds for four classical pattern discrimination tasks, performed while attention was engaged by another concurrent task. Our model then predicts that the seemingly complex improvements of certain thresholds, which we observed when attention was fully available for the discrimination tasks, can best be explained by a strengthening of competition among early visual filters.

1 INTRODUCTION

What happens when we voluntarily focus our attention on a restricted part of our visual field? Here we investigate the possibility that attention might have a specific computational modulatory effect on early visual processing.
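A hedged sketch of the kind of mechanism the abstract describes: a bank of visual filters interacting through divisive inhibition, where strengthening the competition among filters (here, raising the pooling exponent) sharpens the population response. The functional form, exponent values, and semi-saturation constant below are assumptions for illustration, not the paper's calibrated model.

```python
import numpy as np

def pooled_response(energies, gamma=2.0, delta=1.5, s=0.1):
    """Divisive-normalization response of each filter.

    energies : linear filter outputs
    gamma    : excitatory exponent (assumed value)
    delta    : pooling exponent; larger -> stronger competition
    s        : semi-saturation constant (assumed value)
    """
    e = np.asarray(energies, dtype=float)
    return e**gamma / (s**delta + np.sum(e**delta))

stim = np.array([1.0, 0.8, 0.3, 0.2])   # assumed filter activations
print("weak competition:  ", pooled_response(stim, delta=1.5))
print("strong competition:", pooled_response(stim, delta=3.0))
```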
Using Analytic QP and Sparseness to Speed Training of Support Vector Machines
Platt, John C.
SVMs have empirically been shown to give good generalization performance on a wide variety of problems. However, the use of SVMs is still limited to a small group of researchers. One possible reason is that training algorithms for SVMs are slow, especially for large problems. Another explanation is that SVM training algorithms are complex, subtle, and sometimes difficult to implement. This paper describes a new SVM learning algorithm that is easy to implement, often faster, and has better scaling properties than the standard SVM training algorithm. The new SVM learning algorithm is called Sequential Minimal Optimization (or SMO).
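The core of SMO is the well-known analytic optimization of just two Lagrange multipliers at a time, so no numerical QP solver is needed. The simplified sketch below shows that two-variable step only; the working-set selection heuristics, error-cache updates, and full KKT bookkeeping of the real algorithm are omitted.

```python
import numpy as np

def smo_step(i, j, alpha, y, K, errors, C=1.0, eps=1e-8):
    """Analytically optimize the pair (alpha[i], alpha[j]).

    alpha  : current Lagrange multipliers
    y      : labels in {-1, +1}
    K      : kernel matrix
    errors : E_k = f(x_k) - y_k (caller must refresh after each step)
    """
    if i == j:
        return False
    # Box constraints keeping 0 <= alpha <= C and sum(y * alpha) fixed.
    if y[i] != y[j]:
        lo, hi = max(0.0, alpha[j] - alpha[i]), min(C, C + alpha[j] - alpha[i])
    else:
        lo, hi = max(0.0, alpha[i] + alpha[j] - C), min(C, alpha[i] + alpha[j])
    if lo >= hi:
        return False
    eta = K[i, i] + K[j, j] - 2.0 * K[i, j]   # curvature along the pair
    if eta <= eps:
        return False                           # degenerate case omitted here
    # Unconstrained optimum for alpha[j], clipped to the box [lo, hi].
    aj = np.clip(alpha[j] + y[j] * (errors[i] - errors[j]) / eta, lo, hi)
    ai = alpha[i] + y[i] * y[j] * (alpha[j] - aj)  # preserve the constraint
    alpha[i], alpha[j] = ai, aj
    return True
```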
Learning Curves for Gaussian Processes
Sollich, Peter
I consider the problem of calculating learning curves (i.e., average generalization performance) of Gaussian processes used for regression. A simple expression for the generalization error in terms of the eigenvalue decomposition of the covariance function is derived, and used as the starting point for several approximation schemes. I identify where these become exact, and compare with existing bounds on learning curves; the new approximations, which can be used for any input space dimension, generally get substantially closer to the truth.

1 INTRODUCTION: GAUSSIAN PROCESSES

Within the neural networks community, there has in the last few years been a good deal of excitement about the use of Gaussian processes as an alternative to feedforward networks [1]. The advantages of Gaussian processes are that prior assumptions about the problem to be learned are encoded in a very transparent way, and that inference (at least in the case of regression, which I will consider here) is relatively straightforward. One crucial question for applications is then how 'fast' Gaussian processes learn, i.e., how many training examples are needed to achieve a certain level of generalization performance.
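To make the eigenvalue-based approach concrete, here is a hedged sketch of a self-consistent learning-curve approximation of this general flavor: the covariance function's eigenvalues lam_i and the noise variance determine how the error eps(n) decays with training-set size n. The specific fixed-point form used below, and the power-law spectrum in the example, are illustrative assumptions rather than the paper's exact expressions.

```python
import numpy as np

def learning_curve(lam, sigma2, ns, iters=200):
    """Approximate generalization error eps(n) from eigenvalues `lam`
    and noise variance `sigma2`, via fixed-point iteration on
    eps = sum_i lam_i / (1 + n * lam_i / (sigma2 + eps))."""
    lam = np.asarray(lam, dtype=float)
    curve = []
    for n in ns:
        eps = lam.sum()            # start from the prior variance
        for _ in range(iters):     # iterate to self-consistency
            eps = np.sum(lam / (1.0 + n * lam / (sigma2 + eps)))
        curve.append(eps)
    return np.array(curve)

# Example: assumed power-law eigenvalue spectrum, moderate noise.
lam = 1.0 / np.arange(1, 201) ** 2
print(learning_curve(lam, sigma2=0.1, ns=[1, 10, 100, 1000]))
```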
Unsupervised Classification with Non-Gaussian Mixture Models Using ICA
Lee, Te-Won, Lewicki, Michael S., Sejnowski, Terrence J.
We present an unsupervised classification algorithm based on an ICA mixture model. The ICA mixture model assumes that the observed data can be categorized into several mutually exclusive data classes in which the components in each class are generated by a linear mixture of independent sources. The algorithm finds the independent sources, the mixing matrix for each class and also computes the class membership probability for each data point. This approach extends the Gaussian mixture model so that the classes can have non-Gaussian structure. We demonstrate that this method can learn efficient codes to represent images of natural scenes and text.
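A hedged sketch of the class-membership computation in an ICA mixture model: each class k has its own unmixing matrix W_k and bias b_k, and the recovered sources s = W_k (x - b_k) are scored under a sparse source prior. The Laplacian prior and the two-class toy setup below are assumptions for illustration; learning the unmixing matrices themselves (gradient ascent on this likelihood) is omitted.

```python
import numpy as np

def log_lik(x, W, b):
    """log p(x | class) under an assumed Laplacian source prior."""
    s = W @ (x - b)                       # recovered independent sources
    _, logdet = np.linalg.slogdet(W)      # change-of-variables term
    return logdet + np.sum(-np.abs(s) - np.log(2.0))

def class_posteriors(x, Ws, bs, priors):
    """p(class | x) for one data point, via Bayes' rule."""
    ll = np.array([log_lik(x, W, b) for W, b in zip(Ws, bs)])
    ll += np.log(priors)
    ll -= ll.max()                        # for numerical stability
    p = np.exp(ll)
    return p / p.sum()

rng = np.random.default_rng(1)
Ws = [np.eye(2), rng.standard_normal((2, 2))]   # assumed class unmixings
bs = [np.zeros(2), np.ones(2)]                  # assumed class biases
print(class_posteriors(np.array([0.1, -0.2]), Ws, bs, priors=[0.5, 0.5]))
```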