When will a Genetic Algorithm Outperform Hill Climbing
Mitchell, Melanie, Holland, John H., Forrest, Stephanie
Holland Dept. of Psychology University of Michigan Ann Arbor, MI 48109 Stephanie Forrest Dept. of Computer Science University of New Mexico Albuquerque, NM 87131 Abstract We analyze a simple hill-climbing algorithm (RMHC) that was previously shown to outperform a genetic algorithm (GA) on a simple "Royal Road" function. We then analyze an "idealized" genetic algorithm (IGA) that is significantly faster than RMHC and that gives a lower bound for GA speed. We identify the features of the IGA that give rise to this speedup, and discuss how these features can be incorporated into a real GA. 1 INTRODUCTION Our goal is to understand the class of problems for which genetic algorithms (GAs) are most suited, and in particular, for which they will outperform other search algorithms. Several studies have empirically compared GAs with other search and optimization methods such as simple hill climbing (e.g., Davis, 1991), simulated annealing (e.g., Ingber & Rosen, 1992), linear, nonlinear, and integer programming techniques, and other traditional optimization techniques (e.g., De Jong, 1975). However, such studies typically compare one version of the GA with a second algorithm on a single problem or set of problems, often using performance criteria that may not be appropriate.
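The RMHC baseline analyzed above is simple enough to sketch directly. The following illustration (function and parameter names are my own, and the fitness shown is the simple block-counting form of a "Royal Road" function, which may differ from the exact function used in the paper) implements random-mutation hill climbing on a bit string: flip one randomly chosen bit, keep the flip if fitness does not decrease, otherwise undo it.

```python
import random

def royal_road(bits, block=8):
    # Block-counting Royal Road fitness: each block of `block` consecutive
    # ones contributes `block` to the fitness; partial blocks contribute 0.
    return sum(block for i in range(0, len(bits), block)
               if all(bits[i:i + block]))

def rmhc(fitness, n=64, max_evals=100000, seed=0):
    # Random-mutation hill climbing on a length-n bit string.
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    best = fitness(x)
    for _ in range(max_evals):
        if best == n:          # all blocks complete (assumes block divides n)
            break
        i = rng.randrange(n)
        x[i] ^= 1              # flip one randomly chosen bit
        f = fitness(x)
        if f >= best:          # accept the mutant if fitness is not worse
            best = f
        else:
            x[i] ^= 1          # otherwise undo the flip
    return x, best
```

Accepting equal-fitness mutants (the `>=` test) lets RMHC drift across fitness plateaus, which is essential on Royal Road functions where most single-bit flips leave the fitness unchanged.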
Optimal Signalling in Attractor Neural Networks
Meilijson, Isaac, Ruppin, Eytan
It is well known that a given cortical neuron can respond with a different firing pattern for the same synaptic input, depending on its firing history and on the effects of modulator transmitters (see [Connors and Gutnick, 1990] for a review). The time span of different channel conductances is very broad, and the influence of some ionic currents varies with the history of the membrane potential [Lytton, 1991]. Motivated by the history-dependent nature of neuronal firing, we continue our
A Unified Gradient-Descent/Clustering Architecture for Finite State Machine Induction
Das, Sreerupa, Mozer, Michael C.
Researchers often try to understand, post hoc, the representations that emerge in the hidden layers of a neural net following training. Interpretation is difficult because these representations are typically highly distributed and continuous. By "continuous," we mean that if one constructed a scatterplot over the hidden unit activity space of patterns obtained in response to various inputs, examination at any scale would reveal the patterns to be broadly distributed over the space.
Comparison Training for a Rescheduling Problem in Neural Networks
Keymeulen, Didier, Gerlache, Martine de
Many events, such as flight delays or the absence of a member, require the crew pool rescheduling team to change the initial schedule (rescheduling). In this paper, we show that the neural network comparison paradigm applied to the backgammon game by Tesauro (Tesauro and Sejnowski, 1989) can also be applied to the rescheduling problem of an aircrew pool. Indeed, both problems correspond to choosing the best solution
Transition Point Dynamic Programming
Buckland, Kenneth M., Lawrence, Peter D.
Transition point dynamic programming (TPDP) is a memory-based, reinforcement learning, direct dynamic programming approach to adaptive optimal control that can reduce the learning time and memory usage required for the control of continuous stochastic dynamic systems. TPDP does so by determining an ideal set of transition points (TPs) which specify only the control action changes necessary for optimal control. TPDP converges to an ideal TP set by using a variation of Q-learning to assess the merits of adding, swapping, and removing TPs from states throughout the state space. When applied to a race track problem, TPDP learned the optimal control policy much sooner than conventional Q-learning, and was able to do so using less memory. 1 INTRODUCTION Dynamic programming (DP) approaches can be utilized to determine optimal control policies for continuous stochastic dynamic systems when the state spaces of those systems have been quantized with a resolution suitable for control (Barto et al., 1991). DP controllers, in their simplest form, are memory-based controllers that operate by repeatedly updating cost values associated with every state in the discretized state space (Barto et al., 1991).
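TPDP assesses TPs with "a variation of Q-learning." As a point of reference, a minimal sketch of the one-step tabular Q-learning update being varied (function name, dictionary representation, and default step sizes here are illustrative assumptions, not taken from the paper) could look like:

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning backup:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    Q is a dict mapping (state, action) pairs to value estimates;
    unseen pairs default to 0.0."""
    old = Q.get((s, a), 0.0)
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return Q[(s, a)]
```

The memory saving claimed for TPDP comes from storing entries like these only at transition points, rather than maintaining cost values for every state in the discretized state space as a conventional DP controller does.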
On the Non-Existence of a Universal Learning Algorithm for Recurrent Neural Networks
We prove that the so-called "loading problem" for (recurrent) neural networks is unsolvable. This extends several results which already demonstrated that training and related design problems for neural networks are (at least) NP-complete. Our result also implies that it is impossible to find or to formulate a universal training algorithm, which for any neural network architecture could determine a correct set of weights. For the simple proof of this, we will just show that the loading problem is equivalent to "Hilbert's tenth problem," which is known to be unsolvable.
Complexity Issues in Neural Computation and Learning
Roychowdhury, V. P., Siu, K.-Y.
The general goal of this workshop was to bring together researchers working toward developing a theoretical framework for the analysis and design of neural networks. The technical focus of the workshop was to address recent work in the following three areas: 1) computational complexity issues in neural networks, 2) complexity issues in learning, and 3) convergence and numerical properties of learning algorithms. Such studies, in turn, have generated considerable research interest. A similar development can be observed in the area of learning as well: techniques primarily developed in the classical theory of learning are being applied to understand the generalization and learning characteristics of neural networks.
Optimal Unsupervised Motor Learning Predicts the Internal Representation of Barn Owl Head Movements
This implies the existence of a set of orthogonal internal coordinates that are related to meaningful coordinates of the external world. No coherent computational theory has yet been proposed to explain this finding. I have proposed a simple model which provides a framework for a theory of low-level motor learning. I show that the theory predicts the observed microstimulation results in the barn owl. The model rests on the concept of "Optimal Unsupervised Motor Learning," which provides a set of criteria that predict optimal internal representations. I describe two iterative neural network algorithms which find the optimal solution and demonstrate possible mechanisms for the development of internal representations in animals. 1 INTRODUCTION In the sensory domain, many algorithms for unsupervised learning have been proposed. These algorithms learn depending on statistical properties of the input data, and often can be used to find useful "intermediate" sensory representations
Robot Learning: Exploration and Continuous Domains
David A. Cohn MIT Dept. of Brain and Cognitive Sciences Cambridge, MA 02139 The goal of this workshop was to discuss two major issues: efficient exploration of a learner's state space, and learning in continuous domains. The common themes that emerged in presentations and in discussion were the importance of choosing one's domain assumptions carefully, mixing controllers/strategies, avoidance of catastrophic failure, new approaches to difficulties with reinforcement learning, and the importance of task transfer. Moore suggested that neither "fewer assumptions are better" nor "more assumptions are better" is a tenable position, and that we should strive to find and use standard sets of assumptions. With no such commonality, comparison of techniques and results is meaningless. Under Moore's guidance, the group discussed the possibility of designing an algorithm which used a number of well-chosen assumption sets and switched between them according to their empirical validity.