Active Data Clustering
Hofmann, Thomas, Buhmann, Joachim M.
Active data clustering is a novel technique for clustering proximity data which utilizes principles from sequential experiment design in order to interleave data generation and data analysis. The proposed active data sampling strategy is based on the expected value of information, a concept rooted in statistical decision theory. This is an important step towards the analysis of large-scale datasets, because it offers a way to overcome the inherent data sparseness of proximity data.
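The interleaved query-then-recluster loop can be made concrete with a toy sketch. Assuming a partially observed dissimilarity matrix D (NaN for unmeasured pairs) and a current cluster assignment, the heuristic below returns the next pair to measure; the simple boundary-based score is only a crude stand-in for the paper's expected-value-of-information criterion, and all names are illustrative.

```python
import numpy as np

def select_query(D, labels):
    """Pick the unmeasured pair (i, j) whose dissimilarity looks most
    informative for the current clustering.  Pairs straddling a cluster
    boundary, and objects with few measurements so far, score higher;
    a crude proxy for the expected value of information."""
    n = D.shape[0]
    best, best_score = None, -np.inf
    for i in range(n):
        for j in range(i + 1, n):
            if not np.isnan(D[i, j]):
                continue  # this dissimilarity is already measured
            score = 1.0 if labels[i] != labels[j] else 0.5
            score += np.isnan(D[i]).mean() + np.isnan(D[j]).mean()
            if score > best_score:
                best, best_score = (i, j), score
    return best
```

Re-clustering with the augmented matrix after each measurement closes the generate-analyze loop the abstract describes.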
Multi-modular Associative Memory
Levy, Nir, Horn, David, Ruppin, Eytan
Motivated by the findings of modular structure in the association cortex, we study a multi-modular model of associative memory that can successfully store memory patterns with different levels of activity. We show that the segregation of synaptic conductances into intra-modular linear and inter-modular nonlinear ones considerably enhances the network's memory retrieval performance. Compared with the conventional, single-module associative memory network, the multi-modular network has two main advantages: it is less susceptible to damage to columnar input, and its response is consistent with the cognitive data pertaining to category-specific impairment.

1 Introduction

Cortical modules were observed in the somatosensory and visual cortices a few decades ago. These modules differ in their structure and functioning but are likely to be an elementary unit of processing in the mammalian cortex. Within each module the neurons are interconnected.
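The segregation of conductances can be illustrated with a toy Hopfield-style sketch: intra-modular input reaches each neuron linearly, while inter-modular input is first passed through a nonlinearity (tanh here; the choice, like all names below, is an assumption, not the paper's model).

```python
import numpy as np

def retrieve(patterns, modules, probe, steps=20):
    """Toy multi-modular associative memory with Hebbian weights.
    `modules` maps each neuron index to a module id; intra-modular
    fields are summed linearly, inter-modular fields pass through a
    nonlinearity before reaching the neuron."""
    modules = np.asarray(modules)
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    same = modules[:, None] == modules[None, :]   # intra-module mask
    s = np.asarray(probe, dtype=float)
    for _ in range(steps):
        h = (W * same) @ s + np.tanh((W * ~same) @ s)
        s = np.sign(h)                             # +/-1 neuron states
    return s
```

Storing a few random +/-1 patterns over, say, four modules of four neurons and probing with a corrupted pattern typically recovers the stored memory within a few sweeps.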
Learning Continuous Attractors in Recurrent Networks
Seung, H. Sebastian
One approach to invariant object recognition employs a recurrent neural network as an associative memory. In the standard depiction of the network's state space, memories of objects are stored as attractive fixed points of the dynamics. I argue for a modification of this picture: if an object has a continuous family of instantiations, it should be represented by a continuous attractor. This idea is illustrated with a network that learns to complete patterns. To perform the task of filling in missing information, the network develops a continuous attractor that models the manifold from which the patterns are drawn.
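A linear caricature of this idea fits in a few lines. The sketch below (names assumed throughout) learns the top-k principal subspace of the training patterns and completes a partially observed probe by alternately projecting onto that subspace and clamping the observed entries; a flat subspace is the simplest stand-in for the curved continuous attractors the abstract argues for.

```python
import numpy as np

def complete(X_train, probe, known, k=2, iters=50):
    """Pattern completion by relaxing onto a learned linear manifold.
    Every point of the top-k principal subspace is a fixed point of
    the projection dynamics, i.e. the subspace is a continuous
    attractor.  `known` is a boolean mask of the observed entries."""
    mu = X_train.mean(axis=0)
    U = np.linalg.svd(X_train - mu, full_matrices=False)[2][:k].T
    P = U @ U.T                      # projector onto the subspace
    x = np.where(known, probe, mu)   # initialize missing entries
    for _ in range(iters):
        x = mu + P @ (x - mu)        # flow toward the manifold...
        x[known] = probe[known]      # ...while clamping observed data
    return x
```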
Nonlinear Markov Networks for Continuous Variables
Hofmann, Reimar, Tresp, Volker
We address the problem of learning structure in nonlinear Markov networks with continuous variables. This can be viewed as non-Gaussian multidimensional density estimation exploiting certain conditional independencies in the variables. Markov networks are a graphical way of describing conditional independencies well suited to model relationships which do not exhibit a natural causal ordering. We use neural network structures to model the quantitative relationships between variables. The main focus in this paper will be on learning the structure for the purpose of gaining insight into the underlying process. Using two data sets we show that interesting structures can be found using our approach. Inference will be briefly addressed.
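As a rough illustration of structure search in this spirit (not the authors' algorithm), the sketch below fits a small neural net to predict each variable from the rest and scores a candidate edge by the validation loss incurred when one input is dropped; an undirected edge survives only if it matters in both directions. The threshold, model size, and names are all arbitrary choices.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def markov_structure(X, names, thresh=0.05):
    """Crude edge selection for a continuous Markov network via
    drop-one neural regression: score[i, j] is the extra validation
    error when x_j is removed from the predictors of x_i."""
    n, d = X.shape
    tr, va = X[: n // 2], X[n // 2:]
    score = np.zeros((d, d))
    for i in range(d):
        rest = [j for j in range(d) if j != i]
        base = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                            random_state=0).fit(tr[:, rest], tr[:, i])
        base_err = np.mean((base.predict(va[:, rest]) - va[:, i]) ** 2)
        for j in rest:
            keep = [k for k in rest if k != j]
            m = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                             random_state=0).fit(tr[:, keep], tr[:, i])
            err = np.mean((m.predict(va[:, keep]) - va[:, i]) ** 2)
            score[i, j] = err - base_err
    return [(names[i], names[j]) for i in range(d)
            for j in range(i + 1, d)
            if min(score[i, j], score[j, i]) > thresh]
```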
Competitive On-line Linear Regression
Vovk, Volodya
We apply a general algorithm for merging prediction strategies (the Aggregating Algorithm) to the problem of linear regression with the square loss; our main assumption is that the response variable is bounded. It turns out that for this particular problem the Aggregating Algorithm resembles, but is slightly different from, the well-known ridge estimation procedure. From general results about the Aggregating Algorithm we deduce a guaranteed bound on the difference between our algorithm's performance and the best, in some sense, linear regression function's performance. We show that the AA attains the optimal constant in our bound, whereas the constant attained by the ridge regression procedure in general can be 4 times worse.

1 INTRODUCTION

The usual approach to regression problems is to assume that the data are generated by some stochastic mechanism and make some, typically very restrictive, assumptions about that stochastic mechanism. In recent years, however, a different approach to this kind of problem was developed (see, e.g., DeSantis et al. [2], Littlestone and Warmuth [7]): in our context, that approach sets the goal of finding an online algorithm that performs not much worse than the best regression function found off-line; in other words, it replaces the usual statistical analyses by the competitive analysis of online algorithms. DeSantis et al. [2] performed a competitive analysis of the Bayesian merging scheme for the log-loss prediction game; later Littlestone and Warmuth [7] and Vovk [10] introduced an online algorithm (called the Weighted Majority Algorithm by the former authors) for the simple binary prediction game. These two algorithms (the Bayesian merging scheme and the Weighted Majority Algorithm) are special cases of the Aggregating Algorithm (AA) proposed in [9, 11]. The AA is a member of a wide family of algorithms called "multiplicative weight" or "exponential weight" algorithms. Closer to the topic of this paper, Cesa-Bianchi et al. [1] performed a competitive analysis, under the square loss, of the standard Gradient Descent Algorithm, and Kivinen and Warmuth [6] complemented it by a competitive analysis of a modification of the Gradient Descent, which they call the Exponentiated Gradient Algorithm.
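The "resembles, but is slightly different from, ridge" remark can be made concrete. In the square-loss form of the AA for this game, as I understand it, the only change from online ridge regression is that the current input is folded into the regularized Gram matrix before predicting; the clipping to [-Y, Y] reflects the assumed bound on the response, and the variable names below are assumptions.

```python
import numpy as np

def online_predictions(xs, ys, a=1.0, Y=1.0):
    """Online square-loss predictions of ridge regression and of the
    Aggregating Algorithm: both predict b^T A^{-1} x_t, but the AA
    adds x_t x_t^T to the Gram matrix A *before* predicting."""
    d = xs.shape[1]
    A = a * np.eye(d)          # regularized Gram matrix so far
    b = np.zeros(d)            # running sum of y_s * x_s
    ridge, aa = [], []
    for x, y in zip(xs, ys):
        ridge.append(np.clip(b @ np.linalg.solve(A, x), -Y, Y))
        A += np.outer(x, x)    # the AA folds in the current input
        aa.append(np.clip(b @ np.linalg.solve(A, x), -Y, Y))
        b += y * x
    return np.array(ridge), np.array(aa)
```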
Characterizing Neurons in the Primary Auditory Cortex of the Awake Primate Using Reverse Correlation
DeCharms, R. Christopher, Merzenich, Michael
While the functional role of different classes of neurons in the awake primary visual cortex has been studied extensively since the time of Hubel and Wiesel (Hubel and Wiesel, 1962), our understanding of the feature selectivity and functional role of neurons in the primary auditory cortex is much further from complete. Moving bars have long been recognized as an optimal stimulus for many visual cortical neurons, and this finding has recently been confirmed and extended in detail using reverse correlation methods (Jones and Palmer, 1987; Reid and Alonso, 1995; Reid et al., 1991; Ringach et al., 1997). In this study, we recorded from neurons in the primary auditory cortex of the awake primate, and used a novel reverse correlation technique to compute receptive fields (or preferred stimuli), encompassing both multiple frequency components and ongoing time. These spectrotemporal receptive fields make clear that neurons in the primary auditory cortex, as in the primary visual cortex, typically show considerable structure in their feature processing properties, often including multiple excitatory and inhibitory regions in their receptive fields. These neurons can be sensitive to stimulus edges in frequency composition or in time, and sensitive to stimulus transitions such as changes in frequency. These neurons also show strong responses and selectivity to continuous frequency-modulated stimuli analogous to visual drifting gratings.
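The core reverse-correlation computation reduces to spike-triggered averaging. A minimal sketch, assuming a stimulus spectrogram spec (frequency by time bins) and a binary spike train aligned to the same bins; for white-noise-like stimuli the average stimulus preceding each spike approximates the linear spectrotemporal receptive field.

```python
import numpy as np

def strf(spec, spikes, n_lags=40):
    """Spectrotemporal receptive field by reverse correlation:
    average the spectrogram over the n_lags time bins preceding
    each spike."""
    n_freq, _ = spec.shape
    acc = np.zeros((n_freq, n_lags))
    count = 0
    for t in np.flatnonzero(spikes):
        if t >= n_lags:
            acc += spec[:, t - n_lags:t]   # stimulus window before spike
            count += 1
    return acc / max(count, 1)
```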
Learning Human-like Knowledge by Singular Value Decomposition: A Progress Report
Landauer, Thomas K., Laham, Darrell, Foltz, Peter W.
Singular value decomposition (SVD) can be viewed as a method for unsupervised training of a network that associates two classes of events reciprocally by linear connections through a single hidden layer. SVD was used to learn and represent relations among very large numbers of words (20k-60k) and very large numbers of natural text passages (1k-70k) in which they occurred. The result was 100-350 dimensional "semantic spaces" in which any trained or newly added word or passage could be represented as a vector, and similarities were measured by the cosine of the contained angle between vectors. Good accuracy in simulating human judgments and behaviors has been demonstrated by performance on multiple-choice vocabulary and domain knowledge tests, emulation of expert essay evaluations, and in several other ways. Examples are also given of how the kind of knowledge extracted by this method can be applied.
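The method reduces to a truncated SVD of a word-by-passage matrix. A minimal numpy sketch (function names assumed; the log/entropy term weighting commonly applied in latent semantic analysis is omitted for brevity):

```python
import numpy as np

def semantic_space(counts, k=100):
    """Truncated SVD of a word-by-passage count matrix: returns
    k-dimensional vectors for words (rows) and passages (columns)."""
    U, s, Vt = np.linalg.svd(counts, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k].T * s[:k]

def cosine(u, v):
    """Similarity as the cosine of the angle between two vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
```

A new passage can be folded into the space as a weighted combination of its word vectors and then compared to any trained item by cosine similarity.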
Classification by Pairwise Coupling
Hastie, Trevor, Tibshirani, Robert
We discuss a strategy for polychotomous classification that involves estimating class probabilities for each pair of classes, and then coupling the estimates together. The coupling model is similar to the Bradley-Terry method for paired comparisons. We study the nature of the class probability estimates that arise, and examine the performance of the procedure on simulated datasets. The classifiers used include linear discriminants and nearest neighbors; application to support vector machines is also briefly described.
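The coupling step has a compact fixed-point form. Given a matrix R of pairwise estimates, with R[i, j] estimating P(class i | class i or j), the iteration below rescales each class probability until the model probabilities mu_ij = p_i / (p_i + p_j) match the pairwise estimates on average. This sketch takes all pair counts as equal; in practice each term is weighted by the number of training cases in that pair of classes.

```python
import numpy as np

def couple(R, iters=100):
    """Couple pairwise class-probability estimates R into a single
    probability vector over the K classes by iterative scaling."""
    K = R.shape[0]
    p = np.full(K, 1.0 / K)
    for _ in range(iters):
        for i in range(K):
            js = [j for j in range(K) if j != i]
            mu = p[i] / (p[i] + p[js])      # model pairwise probs
            p[i] *= R[i, js].sum() / mu.sum()
            p /= p.sum()                    # renormalize each step
    return p
```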