Interposing an ontogenetic model between Genetic Algorithms and Neural Networks
The relationship between learning, development and evolution in Nature is taken seriously, to suggest a model of the developmental process whereby the genotypes manipulated by the Genetic Algorithm (GA) might be expressed to form phenotypic neural networks (NNets) that then go on to learn. ONTOL is a grammar for generating polynomial NNets for time-series prediction. Genomes correspond to an ordered sequence of ONTOL productions and define a grammar that is expressed to generate a NNet. The NNet's weights are then modified by learning, and the individual's prediction error is used to determine GA fitness. A new gene doubling operator appears critical to the formation of new genetic alternatives in the preliminary but encouraging results presented.
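The gene doubling operator is described only at this level of detail here; as a minimal illustrative sketch (the genome encoding as a list of production strings and the `gene_double` helper are assumptions, not the paper's implementation), duplicating a randomly chosen production so that later mutations can diverge the copy might look like:

```python
import random

def gene_double(genome, rng=random):
    """Duplicate one randomly chosen gene (production), keeping order.

    The copy sits next to the original, so subsequent mutation can turn
    it into a new genetic alternative without losing the original.
    """
    if not genome:
        return list(genome)
    i = rng.randrange(len(genome))
    return genome[:i + 1] + [genome[i]] + genome[i + 1:]

# Example: a genome as an ordered sequence of (hypothetical) ONTOL productions.
genome = ["S->AB", "A->xA", "B->y"]
print(gene_double(genome))  # one production now appears twice
```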
Reinforcement Learning Applied to Linear Quadratic Regulation
Recent research on reinforcement learning has focused on algorithms based on the principles of Dynamic Programming (DP). One of the most promising areas of application for these algorithms is the control of dynamical systems, and some impressive results have been achieved. However, there are significant gaps between practice and theory. In particular, there are no convergence proofs for problems with continuous state and action spaces, or for systems involving nonlinear function approximators (such as multilayer perceptrons). This paper presents research applying DP-based reinforcement learning theory to Linear Quadratic Regulation (LQR), an important class of control problems involving continuous state and action spaces and requiring a simple type of nonlinear function approximator. We describe an algorithm based on Q-learning that is proven to converge to the optimal controller for a large class of LQR problems. We also describe a slightly different algorithm that is only locally convergent to the optimal Q-function, demonstrating one of the possible pitfalls of using a nonlinear function approximator with DP-based learning.
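As a hedged illustration of the core idea (the scalar plant, cost weights, learning rate, and reset rule below are toy-example assumptions, not the algorithm the paper proves convergent), semi-gradient Q-learning with a quadratic Q-function on a one-dimensional LQR problem might look like:

```python
import numpy as np

# Toy LQR: x' = a*x + b*u with stage cost q*x**2 + r*u**2 (assumed values).
# The Q-function is quadratic: Q(x, u) = z @ H @ z with z = [x, u].
a, b, q, r = 0.9, 0.5, 1.0, 0.1
gamma, lr = 0.95, 0.01
H = np.eye(2)

rng = np.random.default_rng(0)
x = 1.0
for step in range(20000):
    u = -(H[0, 1] / H[1, 1]) * x + 0.1 * rng.standard_normal()  # greedy + exploration
    cost = q * x**2 + r * u**2
    x_next = a * x + b * u
    u_next = -(H[0, 1] / H[1, 1]) * x_next                      # greedy next action
    z, z_next = np.array([x, u]), np.array([x_next, u_next])
    td = cost + gamma * z_next @ H @ z_next - z @ H @ z         # Bellman residual
    H += lr * td * np.outer(z, z)                               # semi-gradient step
    H[1, 1] = max(H[1, 1], 1e-3)                                # keep Q convex in u
    x = x_next if abs(x_next) < 10.0 else rng.standard_normal() # reset if diverging

print("learned feedback gain:", -(H[0, 1] / H[1, 1]))
```

The greedy action comes from minimizing the quadratic in u: dQ/du = 2(H[0,1]x + H[1,1]u) = 0.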
A Formal Model of the Insect Olfactory Macroglomerulus: Simulations and Analytic Results
Linster, Christiane, Marsan, David, Masson, Claudine, Kerszberg, Michel, Dreyfus, Gérard, Personnaz, Léon
It is known from biological data that the response patterns of interneurons in the olfactory macroglomerulus (MGC) of insects are of central importance for the coding of the olfactory signal. We propose an analytically tractable model of the MGC which allows us to relate the distribution of response patterns to the architecture of the network.
Improving Convergence in Hierarchical Matching Networks for Object Recognition
We are interested in the use of analog neural networks for recognizing visual objects. Objects are described by the set of parts they are composed of and their structural relationship. Structural models are stored in a database and the recognition problem reduces to matching data to models in a structurally consistent way. The object recognition problem is in general very difficult in that it involves coupled problems of grouping, segmentation and matching. We limit the problem here to the simultaneous labelling of the parts of a single object and the determination of analog parameters. This coupled problem reduces to a weighted match problem in which an optimizing neural network must minimize $E(M, p) = \sum_{\alpha i} M_{\alpha i} W_{\alpha i}(p)$, where the $\{M_{\alpha i}\}$ are binary match variables for data parts $i$ to model parts $\alpha$ and $\{W_{\alpha i}(p)\}$ are weights dependent on parameters $p$.
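To make the energy concrete, here is a minimal sketch of evaluating $E(M, p)$; the squared-distance weight function and all names below are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def match_energy(M, model_feats, data_feats, p):
    """E(M, p) = sum over (alpha, i) of M[alpha, i] * W[alpha, i](p).

    M is a binary match matrix (model parts alpha x data parts i); the
    weight W is a made-up squared feature distance under a scale p.
    """
    W = (p * model_feats[:, None] - data_feats[None, :]) ** 2
    return float(np.sum(M * W))

model_feats = np.array([1.0, 2.0, 3.0])   # hypothetical model-part features
data_feats = np.array([2.1, 0.9, 3.2])    # hypothetical data-part features
M = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 1]])                 # a one-to-one matching
print(match_energy(M, model_feats, data_feats, p=1.0))
```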
A Fast Stochastic Error-Descent Algorithm for Supervised Learning and Optimization
A parallel stochastic algorithm is investigated for error-descent learning and optimization in deterministic networks of arbitrary topology. No explicit information about internal network structure is needed. The method is based on the model-free distributed learning mechanism of Dembo and Kailath. A modified parameter update rule is proposed by which each individual parameter vector perturbation contributes a decrease in error, allowing substantially faster learning. Furthermore, the modified algorithm supports learning time-varying features in dynamical networks. We analyze the convergence and scaling properties of the algorithm, and present simulation results for dynamic trajectory learning in recurrent networks.
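The paper's modified rule guarantees that every perturbation contributes an error decrease; the sketch below shows only the basic measure-and-update loop of model-free stochastic error descent, with an assumed toy error function and constants:

```python
import numpy as np

rng = np.random.default_rng(1)
target = np.array([0.5, -1.0, 2.0, 0.0, 1.5])
theta = rng.standard_normal(5)                 # network parameters

def error(th):
    return float(np.sum((th - target) ** 2))   # stand-in for the network error

sigma, lr = 0.01, 0.05
for _ in range(2000):
    pi = sigma * rng.choice([-1.0, 1.0], size=theta.shape)  # random perturbation
    dE = error(theta + pi) - error(theta)                   # measured error change
    theta -= lr * dE * pi / sigma**2   # dE*pi/sigma**2 estimates the gradient

print("final error:", error(theta))
```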
Learning Sequential Tasks by Incrementally Adding Higher Orders
An incremental, higher-order, non-recurrent network combines two properties found to be useful for learning sequential tasks: higher-order connections and incremental introduction of new units. The network adds higher orders when needed by adding new units that dynamically modify connection weights. Since the new units modify the weights at the next time-step with information from the previous step, temporal tasks can be learned without the use of feedback, thereby greatly simplifying training. Furthermore, a theoretically unlimited number of units can be added to reach into the arbitrarily distant past. Experiments with the Reber grammar have demonstrated speedups of two orders of magnitude over recurrent networks.
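As a loose sketch of the idea (the tensor form and every name here are assumptions, not the paper's architecture), a second-order unit can let the previous input modulate the effective weights applied to the current input, so temporal context enters without recurrent feedback:

```python
import numpy as np

def step(x_prev, x_now, W1, W2):
    """First-order weights W1 plus a second-order term gated by x_prev.

    effective_W[i, j] = W1[i, j] + sum_k W2[i, j, k] * x_prev[k]
    """
    effective_W = W1 + np.tensordot(W2, x_prev, axes=([2], [0]))
    return np.tanh(effective_W @ x_now)

rng = np.random.default_rng(2)
n_in, n_out = 4, 3
W1 = 0.1 * rng.standard_normal((n_out, n_in))
W2 = 0.1 * rng.standard_normal((n_out, n_in, n_in))
x_prev, x_now = rng.standard_normal(n_in), rng.standard_normal(n_in)
print(step(x_prev, x_now, W1, W2))
```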
A Hybrid Neural Net System for State-of-the-Art Continuous Speech Recognition
Zavaliagkos, G., Zhao, Y., Schwartz, R., Makhoul, J.
Until recently, state-of-the-art, large-vocabulary, continuous speech recognition (CSR) has employed Hidden Markov Modeling (HMM) to model speech sounds. In an attempt to improve over HMM we developed a hybrid system that integrates HMM technology with neural networks. We present the concept of a "Segmental Neural Net" (SNN) for phonetic modeling in CSR. By taking into account all the frames of a phonetic segment simultaneously, the SNN overcomes the well-known conditional-independence limitation of HMMs. In several speaker-independent experiments with the DARPA Resource Management corpus, the hybrid system showed a consistent improvement in performance over the baseline HMM system.
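As a rough sketch of segmental scoring (the fixed-length resampling, the tiny one-hidden-layer net, and all dimensions are assumptions, not the paper's SNN), presenting all frames of a hypothesized segment to the network at once might look like:

```python
import numpy as np

def score_segment(frames, W1, b1, W2, b2, n_fixed=5):
    """frames: (T, d) acoustic frames of one variable-length segment."""
    idx = np.linspace(0, len(frames) - 1, n_fixed).round().astype(int)
    x = frames[idx].ravel()               # fixed-length segment representation
    h = np.tanh(W1 @ x + b1)
    logits = W2 @ h + b2
    e = np.exp(logits - logits.max())
    return e / e.sum()                    # posterior over phone classes

rng = np.random.default_rng(3)
d, n_fixed, n_hidden, n_phones = 12, 5, 16, 8
W1 = 0.1 * rng.standard_normal((n_hidden, n_fixed * d)); b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.standard_normal((n_phones, n_hidden));    b2 = np.zeros(n_phones)
frames = rng.standard_normal((23, d))     # one segment, 23 frames
print(score_segment(frames, W1, b1, W2, b2).round(3))
```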
Integration of Visual and Somatosensory Information for Preshaping Hand in Grasping Movements
Uno, Yoji, Fukumura, Naohiro, Suzuki, Ryoji, Kawato, Mitsuo
The primate brain must solve two important problems in grasping movements. The first problem concerns the recognition of grasped objects: specifically, how does the brain integrate visual and motor information on a grasped object? The second problem concerns hand shape planning: specifically, how does the brain design the hand configuration suited to the shape of the object and the manipulation task? A neural network model that solves these problems has been developed. The operations of the network are divided into a learning phase and an optimization phase. In the learning phase, internal representations, which depend on the grasped objects and the task, are acquired by integrating visual and somatosensory information. In the optimization phase, the most suitable hand shape for grasping an object is determined by using a relaxation computation of the network.