Multi-modular Associative Memory
Levy, Nir, Horn, David, Ruppin, Eytan
Motivated by the findings of modular structure in the association cortex, we study a multi-modular model of associative memory that can successfully store memory patterns with different levels of activity. We show that the segregation of synaptic conductances into intra-modular linear and inter-modular nonlinear ones considerably enhances the network's memory retrieval performance. Compared with the conventional, single-module associative memory network, the multi-modular network has two main advantages: it is less susceptible to damage to columnar input, and its response is consistent with the cognitive data pertaining to category-specific impairment.
1 Introduction
Cortical modules were observed in the somatosensory and visual cortices a few decades ago. These modules differ in their structure and functioning but are likely to be an elementary unit of processing in the mammalian cortex. Within each module the neurons are interconnected.
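As a point of reference for the storage scheme described above, here is a minimal sketch of Hebbian storage and retrieval in a single binary module (standard Hopfield-style dynamics, not the paper's multi-modular architecture; the pattern count, module size, and noise level are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 5                            # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

W = patterns.T @ patterns / N            # Hebbian outer-product rule
np.fill_diagonal(W, 0.0)                 # no self-connections

def retrieve(cue, steps=10):
    """Iterate the module from a noisy cue toward a stored pattern."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1                    # break ties consistently
    return s

cue = patterns[0].copy()                 # corrupt 10% of one pattern
flip = rng.choice(N, size=N // 10, replace=False)
cue[flip] *= -1
print(np.array_equal(retrieve(cue), patterns[0]))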
A Generic Approach for Identification of Event Related Brain Potentials via a Competitive Neural Network Structure
Lange, Daniel H., Siegelmann, Hava T., Pratt, Hillel, Inbar, Gideon F.
We present a novel generic approach to the problem of Event Related Potential identification and classification, based on a competitive Neural Net architecture. The network weights converge to the embedded signal patterns, resulting in the formation of a matched filter bank. The network performance is analyzed via a simulation study, exploring identification robustness under low SNR conditions and compared to the expected performance from an information theoretic perspective. The classifier is applied to real event-related potential data recorded during a classic oddball-type paradigm; for the first time, within-session variable signal patterns are automatically identified, dismissing the strong and limiting requirement of a priori stimulus-related selective grouping of the recorded data.
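To illustrate how competitive weights can converge to embedded signal patterns and thereby form a matched filter bank, here is a minimal winner-take-all sketch; the template waveforms, unit count, and learning rate are illustrative assumptions, not the paper's configuration:

import numpy as np

rng = np.random.default_rng(1)
T, K = 64, 2                             # samples per trial, units
templates = np.stack([np.sin(np.linspace(0, 2 * np.pi, T)),
                      np.sign(np.sin(np.linspace(0, 6 * np.pi, T)))])
W = rng.normal(scale=0.1, size=(K, T))   # competing weight vectors

for _ in range(2000):
    x = templates[rng.integers(K)] + rng.normal(scale=0.5, size=T)
    winner = np.argmin(((W - x) ** 2).sum(axis=1))  # closest unit wins
    W[winner] += 0.01 * (x - W[winner])  # move winner toward the input

# Each row of W now approximates one embedded template (up to ordering),
# i.e., a matched filter for that recurring waveform.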
Competitive On-line Linear Regression
We apply a general algorithm for merging prediction strategies (the Aggregating Algorithm) to the problem of linear regression with the square loss; our main assumption is that the response variable is bounded. It turns out that for this particular problem the Aggregating Algorithm resembles, but is slightly different from, the well-known ridge estimation procedure. From general results about the Aggregating Algorithm we deduce a guaranteed bound on the difference between our algorithm's performance and the best, in some sense, linear regression function's performance. We show that the AA attains the optimal constant in our bound, whereas the constant attained by the ridge regression procedure can in general be 4 times worse.
1 INTRODUCTION
The usual approach to regression problems is to assume that the data are generated by some stochastic mechanism and to make some, typically very restrictive, assumptions about that stochastic mechanism. In recent years, however, a different approach to this kind of problem was developed (see, e.g., DeSantis et al. [2], Littlestone and Warmuth [7]): in our context, that approach sets the goal of finding an on-line algorithm that performs not much worse than the best regression function found off-line; in other words, it replaces the usual statistical analyses by the competitive analysis of on-line algorithms. DeSantis et al. [2] performed a competitive analysis of the Bayesian merging scheme for the log-loss prediction game; later Littlestone and Warmuth [7] and Vovk [10] introduced an on-line algorithm (called the Weighted Majority Algorithm by the former authors) for the simple binary prediction game. These two algorithms (the Bayesian merging scheme and the Weighted Majority Algorithm) are special cases of the Aggregating Algorithm (AA) proposed in [9, 11]. The AA is a member of a wide family of algorithms called "multiplicative weight" or "exponential weight" algorithms. Closer to the topic of this paper, Cesa-Bianchi et al. [1] performed a competitive analysis, under the square loss, of the standard Gradient Descent Algorithm, and Kivinen and Warmuth [6] complemented it by a competitive analysis of a modification of the Gradient Descent, which they call the Exponentiated Gradient Algorithm.
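As a point of reference for the ridge comparison, here is a sketch of both on-line predictors under one reading of the difference: in the Aggregating Algorithm's prediction the current input enters the inverted matrix, while ridge regression excludes it. The data, the prior constant a, and this exact formulation are assumptions for illustration:

import numpy as np

def online_predictions(X, y, a=1.0, aggregating=True):
    """Predict y[t] on-line from X[t] under the square loss."""
    n, d = X.shape
    A = a * np.eye(d)             # running matrix a*I + sum of x x^T
    b = np.zeros(d)               # running vector sum of y*x
    preds = np.empty(n)
    for t in range(n):
        x = X[t]
        if aggregating:
            A += np.outer(x, x)   # AA: current x_t enters the matrix
            preds[t] = x @ np.linalg.solve(A, b)
        else:
            preds[t] = x @ np.linalg.solve(A, b)  # ridge: x_t excluded
            A += np.outer(x, x)
        b += y[t] * x
    return preds

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = np.clip(X @ np.array([1.0, -2.0, 0.5]), -5, 5)   # bounded response
for agg in (True, False):
    loss = ((online_predictions(X, y, aggregating=agg) - y) ** 2).sum()
    print("AA" if agg else "ridge", round(float(loss), 3))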
Learning Generative Models with the Up Propagation Algorithm
Oh, Jong-Hoon, Seung, H. Sebastian
Up-propagation is an algorithm for inverting and learning neural network generative models. Sensory input is processed by inverting a model that generates patterns from hidden variables using top-down connections. The inversion process is iterative, utilizing a negative feedback loop that depends on an error signal propagated by bottom-up connections. The error signal is also used to learn the generative model from examples. The algorithm is benchmarked against principal component analysis in experiments on images of handwritten digits.
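A minimal sketch of the inversion loop just described: a top-down generative map produces a pattern from hidden variables, and a bottom-up error signal adjusts those variables by negative feedback (here, plain gradient descent on the reconstruction error). The single tanh layer, step size, and iteration count are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(2)
H, V = 8, 32                             # hidden and visible sizes
G = rng.normal(scale=0.3, size=(V, H))   # top-down generative weights

def generate(h):
    return np.tanh(G @ h)                # top-down pass

def invert(x, steps=500, lr=0.1):
    """Find hidden variables whose generated pattern matches x."""
    h = np.zeros(H)
    for _ in range(steps):
        v = generate(h)
        err = x - v                            # bottom-up error signal
        h += lr * (G.T @ ((1 - v ** 2) * err)) # negative-feedback step
    return h

x = generate(rng.normal(size=H))         # a pattern the model can emit
h_hat = invert(x)
print(float(np.mean((generate(h_hat) - x) ** 2)))  # reconstruction error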
Recovering Perspective Pose with a Dual Step EM Algorithm
Cross, Andrew D. J., Hancock, Edwin R.
This paper describes a new approach to extracting 3D perspective structure from 2D point-sets. The novel feature is to unify the tasks of estimating transformation geometry and identifying point-correspondence matches. Unification is realised by constructing a mixture model over the bipartite graph representing the correspondence match and by effecting optimisation using the EM algorithm. According to our EM framework, the probabilities of structural correspondence gate contributions to the expected likelihood function used to estimate maximum-likelihood perspective pose parameters. This provides a means of rejecting structural outliers.
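To make the gating idea concrete, here is a sketch that substitutes a 2D affine transform for full perspective pose: the E-step converts alignment residuals into correspondence probabilities, and the M-step refits the transform by correspondence-weighted least squares. This illustrates the general dual-step framework only; the transform family, kernel width, and initialization are assumptions, not the paper's algorithm:

import numpy as np

def em_align(src, dst, iters=20, sigma=0.5):
    """src (n,2), dst (m,2): point sets; returns a 2x3 affine map."""
    A = np.hstack([np.eye(2), np.zeros((2, 1))])     # identity start
    S = np.hstack([src, np.ones((len(src), 1))])     # homogeneous source
    for _ in range(iters):
        proj = S @ A.T                               # current projection
        d2 = ((proj[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        P = np.exp(-d2 / (2 * sigma ** 2)) + 1e-12   # E-step: match
        P /= P.sum(axis=1, keepdims=True)            # probabilities
        Y = P @ dst                                  # soft target points
        A = np.linalg.lstsq(S, Y, rcond=None)[0].T   # M-step: refit map
    return A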
Modeling Acoustic Correlations by Factor Analysis
Saul, Lawrence K., Rahim, Mazin G.
Hidden Markov models (HMMs) for automatic speech recognition rely on high-dimensional feature vectors to summarize the short-time properties of speech. Correlations between features can arise when the speech signal is non-stationary or corrupted by noise. We investigate how to model these correlations using factor analysis, a statistical method for dimensionality reduction. Factor analysis uses a small number of parameters to model the covariance structure of high-dimensional data. These parameters are estimated by an Expectation-Maximization (EM) algorithm that can be embedded in the training procedures for HMMs.
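A minimal stand-alone EM sketch for the factor-analysis covariance model, Sigma = Lambda Lambda^T + Psi, on zero-mean data; this is the generic textbook estimator shown outside any HMM, and the initialization and dimensions are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)

def fa_em(X, k, iters=100):
    """Fit Sigma = L L^T + diag(Psi) to zero-mean X of shape (n, d)."""
    n, d = X.shape
    S = X.T @ X / n                        # sample covariance
    L = rng.normal(scale=0.1, size=(d, k)) # random loading init
    Psi = np.diag(S).copy()                # start from full variances
    for _ in range(iters):
        # E-step: posterior moments of the hidden factors z.
        G = np.linalg.inv(L @ L.T + np.diag(Psi))
        beta = L.T @ G                     # regression of z on x
        Ez = X @ beta.T                    # posterior means, (n, k)
        Ezz = n * (np.eye(k) - beta @ L) + Ez.T @ Ez
        # M-step: refit loadings and diagonal noise variances.
        L = (X.T @ Ez) @ np.linalg.inv(Ezz)
        Psi = np.maximum(np.diag(S - L @ (Ez.T @ X) / n), 1e-6)
    return L, Psi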
Monotonic Networks
Monotonicity is a constraint which arises in many application domains. We present a machine learning model, the monotonic network, for which monotonicity can be enforced exactly, i.e., by virtue of functional form. A straightforward method for implementing and training a monotonic network is described. Monotonic networks are proven to be universal approximators of continuous, differentiable monotonic functions. We apply monotonic networks to a real-world task in corporate bond rating prediction and compare them to other approaches.
1 Introduction
Several recent papers in machine learning have emphasized the importance of priors and domain-specific knowledge.
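One way to enforce monotonicity by functional form, sketched below, is a min-over-max-of-linear network whose weights are kept positive by exponentiating free parameters, so the output is non-decreasing in every input by construction. The group and unit counts are illustrative, and training (e.g., gradient descent on the free parameters) is omitted; this is one reading of such a construction, not necessarily the paper's exact model:

import numpy as np

rng = np.random.default_rng(3)
d, groups, units = 3, 4, 4
Z = rng.normal(size=(groups, units, d))  # unconstrained weight params
B = rng.normal(size=(groups, units))     # biases

def monotonic_net(x):
    planes = (np.exp(Z) * x).sum(axis=2) + B  # positive-weight planes
    return planes.max(axis=1).min()           # max within, min across

x = np.array([0.2, -1.0, 0.5])
print(monotonic_net(x) <= monotonic_net(x + 0.1))  # non-decreasing: True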