Automatic Learning Rate Maximization by On-Line Estimation of the Hessian's Eigenvectors
LeCun, Yann, Simard, Patrice Y., Pearlmutter, Barak
We propose a very simple and well-principled way of computing the optimal step size in gradient descent algorithms. The on-line version is computationally very efficient and is applicable to large backpropagation networks trained on large data sets. The main ingredient is a technique for estimating the principal eigenvalue(s) and eigenvector(s) of the objective function's second-derivative matrix (Hessian) that does not require computing the Hessian itself. Several other applications of this technique are proposed for speeding up learning or for eliminating useless parameters.

1 INTRODUCTION

Choosing the appropriate learning rate, or step size, in a gradient descent procedure such as backpropagation is simultaneously one of the most crucial and most expert-intensive parts of neural-network learning. We propose a method for computing the best step size that is well-principled, simple, computationally very cheap, and, most of all, applicable to on-line training with large networks and data sets.
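The core trick of estimating a Hessian eigenpair without ever forming the Hessian can be sketched with power iteration on finite-difference Hessian-vector products. This is a minimal illustration, not the authors' exact on-line procedure; the quadratic objective and all constants are illustrative assumptions.

```python
import numpy as np

# Hypothetical quadratic objective f(w) = 0.5 * w^T A w, so its Hessian is A.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

def grad(w):
    return A @ w

def principal_eigenpair(grad, dim, alpha=1e-4, iters=500, seed=0):
    """Power iteration on the Hessian using only gradient evaluations.

    The Hessian-vector product H v is approximated by the finite
    difference (grad(w + alpha*v) - grad(w)) / alpha, so the Hessian
    matrix itself is never computed or stored.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)                 # point at which the Hessian is probed
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        hv = (grad(w + alpha * v) - grad(w)) / alpha
        lam = np.linalg.norm(hv)      # converges to the largest eigenvalue
        v = hv / lam                  # converges to its eigenvector
    return lam, v

lam, v = principal_eigenpair(grad, dim=2)
# A step size of roughly 1/lam is then the largest stable choice
# along the dominant curvature direction.
```

Each iteration costs only two gradient evaluations, which is what makes an on-line variant feasible for large backpropagation networks.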
A Knowledge-Based Model of Geometry Learning
Towell, Geoffrey, Lehrer, Richard
We propose a model of the development of geometric reasoning in children that explicitly involves learning. The model uses a neural network that is initialized with an understanding of geometry similar to that of second-grade children. Through the presentation of a series of examples, the model is shown to develop an understanding of geometry similar to that of fifth-grade children who were trained using similar materials.
Parameterising Feature Sensitive Cell Formation in Linsker Networks in the Auditory System
Walton, Lance C., Bisset, David L.
This paper examines and extends the work of Linsker (1986) on self-organising feature detectors. Linsker concentrates on the visual processing system but infers that the weak assumptions made will allow the model to be used in the processing of other sensory information. This claim is examined here, with special attention paid to the auditory system, where there is much lower connectivity and therefore more statistical variability. On-line training is used to obtain an idea of training times, which are then compared to the time available to prenatal mammals for the formation of feature-sensitive cells.

1 INTRODUCTION

Within the last thirty years, a great deal of research has been carried out in an attempt to understand the development of cells in the pathways between the sensory apparatus and the cortex in mammals. For example, theories for the development of feature detectors were put forward by Nass and Cooper (1975), by Grossberg (1976), and more recently by Obermayer et al. (1990). Hubel and Wiesel (1961) established the existence of several different types of feature-sensitive cell in the visual cortex of cats. Various subsequent experiments have shown that a considerable amount of development takes place before birth (i.e.
Goal-Driven Learning: Fundamental Issues: A Symposium Report
In his model, for a system to reason about its knowledge needs, it must be able to represent what these needs are. Ram proposed representations that include the desired knowledge (possibly partially specified) and the reason that the knowledge is sought. Leake focused on the representation of the knowledge required to resolve anomalies (which depends on a vocabulary of anomaly characterization structures to describe the information). Requirements for filling knowledge gaps also direct explanation generation by guiding retrieval and revision of explanations during case-based explanation construction (Leake 1992). In the context of analogical mapping, Thagard pointed out that goals, semantic constraints, and syntactic constraints all affect analogical mapping (Holyoak and Thagard 1989) and the retrieval of potential analogs. Storage is done unintentionally; a problem solver attempting to solve a problem simply stores a trace of its processing without attention to its future relevance. However, Ng's previously mentioned studies show that, for a different class of task, learning goals have a strong effect on the learning performance of human learners. A future question is to identify the limits of goal-driven processing in human learners.
The Difficulties of Learning Logic Programs with Cut
Bergadano, F., Gunetti, D., Trinchero, U.
As real logic programmers normally use cut (!), an effective learning procedure for logic programs should be able to deal with it. Because the cut predicate has only a procedural meaning, clauses containing cut cannot be learned using an extensional evaluation method, as is done in most learning systems. On the other hand, searching a space of possible programs (instead of a space of independent clauses) is infeasible. An alternative solution is to first generate a candidate base program which covers the positive examples, and then make it consistent by inserting cut where appropriate. The problem of learning programs with cut has not been investigated before, and this seems a natural and reasonable approach. We generalize this scheme and investigate the difficulties that arise. Some of the major shortcomings are actually caused, in general, by the need for intensional evaluation. In conclusion, the analysis of this paper suggests, on precise and technical grounds, that learning cut is difficult, and that current induction techniques should probably be restricted to purely declarative logic languages.
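Why cut defeats extensional (clause-by-clause) evaluation can be illustrated with a toy interpreter for an ordered clause list. This is a hypothetical illustration, not the authors' system: it mimics the Prolog program `max(X,Y,X) :- X >= Y, !.` followed by `max(X,Y,Y).`, whose second clause is extensionally wrong on its own yet correct in context.

```python
def solutions(x, y, program):
    """Return all answers an interpreter would produce for max(x, y, Z),
    honouring cut: a clause marked has_cut commits, blocking later clauses."""
    out = []
    for guard, value, has_cut in program:
        if guard(x, y):
            out.append(value(x, y))
            if has_cut:
                break                       # the cut: no backtracking past here
    return out

program = [
    (lambda x, y: x >= y, lambda x, y: x, True),   # max(X,Y,X) :- X >= Y, !.
    (lambda x, y: True,   lambda x, y: y, False),  # max(X,Y,Y).
]

print(solutions(5, 3, program))  # [5] -- the cut blocks the second clause
```

Evaluated extensionally, clause 2 alone "covers" the wrong answer max(5,3,3); only intensional, whole-program evaluation sees that the cut in clause 1 makes the program correct. This is exactly why clauses with cut cannot be judged independently against the examples.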