Information Technology
Remote Sensing Image Analysis via a Texture Classification Neural Network
Greenspan, Hayit K., Goodman, Rodney
In this work we apply a texture classification network to remote sensing image analysis. The goal is to extract the characteristics of the area depicted in the input image, thus producing a segmented map of the region. We have recently proposed a combined neural-network and rule-based framework for texture recognition. The framework uses unsupervised and supervised learning, and provides probability estimates for the output classes. We describe the texture classification network and extend it to demonstrate its application to the Landsat and aerial image analysis domain.
1 INTRODUCTION
In this work we apply a texture classification network to remote sensing image analysis. The goal is to segment the input image into homogeneous textured regions and identify each region as one of a prelearned library of textures, e.g.
How Oscillatory Neuronal Responses Reflect Bistability and Switching of the Hidden Assembly Dynamics
Pawelzik, K., Bauer, H.-U., Deppisch, J., Geisel, T.
A switching between apparently coherent (oscillatory) and stochastic episodes of activity has been observed in responses from cat and monkey visual cortex. We describe the dynamics of these phenomena using two parallel approaches: a phenomenological one and a more microscopic one. On the one hand, we analyze neuronal responses in terms of a hidden state model (HSM). The parameters of this model are extracted directly from experimental spike trains. They characterize the underlying dynamics as well as the coupling of individual neurons to the network. This phenomenological model thus provides a new framework for the experimental analysis of network dynamics.
A Knowledge-Based Model of Geometry Learning
Towell, Geoffrey, Lehrer, Richard
We propose a model of the development of geometric reasoning in children that explicitly involves learning. The model uses a neural network that is initialized with an understanding of geometry similar to that of second-grade children. Through the presentation of a series of examples, the model is shown to develop an understanding of geometry similar to that of fifth-grade children who were trained using similar materials.
Weight Space Probability Densities in Stochastic Learning: II. Transients and Basin Hopping Times
Orr, Genevieve B., Leen, Todd K.
In stochastic learning, weights are random variables whose time evolution is governed by a Markov process. We summarize the theory of the time evolution of the weight-space probability density P(w, t), and give graphical examples that contrast the behavior of stochastic learning with true gradient descent (batch learning). Finally, we use the formalism to predict the time required for noise-induced hopping between the basins of different optima. We compare the theoretical predictions with simulations of large ensembles of networks on simple supervised and unsupervised learning problems. Despite the recent application of convergence theorems from stochastic approximation theory to neural network learning (Oja 1982; White 1989), outstanding questions remain about the search dynamics in stochastic learning.
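The contrast the abstract describes can be illustrated with a toy sketch (our construction under assumed settings, not the paper's model or code): on a one-dimensional double-well loss, batch gradient descent is deterministic and stays in its initial basin, while noisy per-example updates make an ensemble of weights behave like samples from an evolving density P(w, t), some of which hop the barrier.

```python
import numpy as np

rng = np.random.default_rng(2)

def grad(w):
    # Gradient of the double-well loss L(w) = (w^2 - 1)^2 / 4,
    # which has two optima, at w = -1 and w = +1.
    return w * (w * w - 1.0)

# Batch (true) gradient descent: deterministic, trapped in its initial basin.
w_batch = 0.3
for _ in range(1000):
    w_batch -= 0.05 * grad(w_batch)

# Stochastic learning: gradient plus noise, standing in for single-example
# updates. An ensemble of 500 weights approximates the density P(w, t).
ensemble = np.full(500, 0.3)
for _ in range(1000):
    ensemble -= 0.05 * (grad(ensemble) + rng.normal(scale=2.0, size=500))

print(w_batch)                 # converges to the nearby optimum at +1
print((ensemble < 0).mean())   # fraction that hopped into the other basin
```

Shrinking the noise scale makes barrier crossings exponentially rarer, which is the regime where predicted basin-hopping times become interesting.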
An Object-Oriented Framework for the Simulation of Neural Nets
Linden, A., Sudbrak, Th., Tietz, Ch., Weber, F.
The field of software simulators for neural networks has been expanding rapidly in recent years, but its importance is still underestimated. Simulators must provide increasing levels of assistance for the design, simulation and analysis of neural networks. With our object-oriented framework (SESAME) we intend to show that a very high degree of transparency, manageability and flexibility can be obtained for complex experiments. SESAME's basic design philosophy is inspired by the natural way in which researchers explain their computational models. Experiments are performed with networks of building blocks, which can be extended very easily.
Using Aperiodic Reinforcement for Directed Self-Organization During Development
Montague, P. R., Dayan, P., Nowlan, S.J., Pouget, A, Sejnowski, T.J.
We present a local learning rule in which Hebbian learning is conditional on an incorrect prediction of a reinforcement signal. We propose a biological interpretation of such a framework and display its utility through examples in which the reinforcement signal is cast as the delivery of a neuromodulator to its target. Three examples are presented which illustrate how this framework can be applied to the development of the oculomotor system.
1 INTRODUCTION
Activity-dependent accounts of the self-organization of the vertebrate brain have relied ubiquitously on correlational (mainly Hebbian) rules to drive synaptic learning. In the brain, a major problem for any such unsupervised rule is that many different kinds of correlations exist at approximately the same time scales, and each is effectively noise to the next. For example, relationships within and between the retinae among variables such as color, motion, and topography may mask one another and disrupt their appropriate segregation at the level of the thalamus or cortex.
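As a rough illustration of such a gated rule (a minimal sketch with an invented task; the network sizes, learning rates, and reward rule are our assumptions, not the paper's), a Hebbian weight change can be multiplied by a reinforcement prediction error, so that plasticity proceeds only while the prediction of reinforcement is wrong:

```python
import numpy as np

rng = np.random.default_rng(1)

n_in = 4
w = rng.normal(scale=0.1, size=n_in)   # Hebbian weights, gated by delta
v = np.zeros(n_in)                     # weights that predict reinforcement
eta_w, eta_v = 0.01, 0.05

deltas = []
for _ in range(2000):
    x = (rng.random(n_in) < 0.5).astype(float)  # random binary input pattern
    r = x[0]                   # hypothetical rule: unit 0 delivers the reward
    y = w @ x                  # postsynaptic activity
    delta = r - v @ x          # reinforcement prediction error
    v += eta_v * delta * x     # learn to predict the reinforcement
    w += eta_w * delta * y * x # Hebbian term, active only when prediction fails
    deltas.append(abs(delta))

# Once the reinforcement is predicted, delta -> 0 and Hebbian learning stops.
print(np.mean(deltas[:100]), np.mean(deltas[-100:]))
```

The point of the gating is visible in the printed values: the prediction error, and with it the effective Hebbian learning rate, decays toward zero as the reinforcement becomes predictable.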
Self-Organizing Rules for Robust Principal Component Analysis
Xu, Lei, Yuille, Alan L.
Principal Component Analysis (PCA) is an essential technique for data compression and feature extraction, and has been widely used in statistical data analysis, communication theory, pattern recognition and image processing. In the neural network literature, many studies have addressed learning rules that implement PCA, or networks closely related to PCA (see Xu & Yuille, 1993, for a detailed reference list containing more than 30 papers on these issues).
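A classic example of such a learning rule is Oja's single-neuron rule, whose weight vector converges to the leading principal component of the input. The sketch below (the data distribution and learning rate are illustrative assumptions, not taken from the paper) shows the idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2-D data whose leading principal component lies along (1, 1).
X = rng.normal(size=(5000, 2)) @ np.array([[2.0, 1.5], [1.5, 2.0]])

w = rng.normal(size=2)
eta = 0.001
for x in X:
    y = w @ x                   # neuron output
    w += eta * y * (x - y * w)  # Oja's rule: Hebbian term plus a decay
                                # that keeps the weight norm bounded

w_unit = w / np.linalg.norm(w)

# Compare with the leading eigenvector of the sample covariance.
C = np.cov(X, rowvar=False)
vals, vecs = np.linalg.eigh(C)
v1 = vecs[:, np.argmax(vals)]
print(abs(w_unit @ v1))  # alignment close to 1.0
```

Robust variants of the kind the title refers to modify such rules so that outlying samples contribute less to the update.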
Using Prior Knowledge in a NNPDA to Learn Context-Free Languages
Das, Sreerupa, Giles, C. Lee, Sun, Guo-Zheng
Language inference and automata induction using recurrent neural networks have gained considerable interest in recent years. Nevertheless, the success of these models has been mostly limited to regular languages. Additional information in the form of a priori knowledge has proved important, and at times necessary, for learning complex languages (Abu-Mostafa, 1990; Al-Mashouq and Reed, 1991; Omlin and Giles, 1992; Towell, 1990). These studies demonstrated that partial information incorporated in a connectionist model guides the learning process through constraints, yielding more efficient learning and better generalization. We have previously shown that the NNPDA model can learn Deterministic Context-Free Languages.
Learning Curves, Model Selection and Complexity of Neural Networks
Murata, Noboru, Yoshizawa, Shuji, Amari, Shun-ichi
Learning curves show how a neural network improves as the number of training examples increases, and how this improvement is related to the network complexity. The present paper clarifies the asymptotic properties of two learning curves and the relation between them: one concerns the predictive (generalization) loss and the other the training loss. The result gives a natural definition of the complexity of a neural network. Moreover, it provides a new criterion for model selection.
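The flavor of such asymptotics can be sketched as follows (a hedged sketch of the standard form this type of analysis yields, with our notation: t is the number of training examples, L_0 the loss of the best network in the model, and m* an effective number of parameters):

```latex
\langle L_{\mathrm{gen}}(t) \rangle \;\simeq\; L_0 + \frac{m^*}{2t},
\qquad
\langle L_{\mathrm{train}}(t) \rangle \;\simeq\; L_0 - \frac{m^*}{2t}.
```

The asymptotic gap between the two curves is then of order m*/t, so comparing training loss across models after correcting for this gap gives a model-selection criterion of the information-criterion type.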