Parameterising Feature Sensitive Cell Formation in Linsker Networks in the Auditory System

Neural Information Processing Systems

This paper examines and extends the work of Linsker (1986) on self-organising feature detectors. Linsker concentrates on the visual processing system, but infers that the weak assumptions made will allow the model to be used in the processing of other sensory information. This claim is examined here, with special attention paid to the auditory system, where there is much lower connectivity and therefore more statistical variability. Online training is utilised to obtain an idea of training times. These are then compared to the time available to prenatal mammals for the formation of feature sensitive cells.

1 INTRODUCTION

Within the last thirty years, a great deal of research has been carried out in an attempt to understand the development of cells in the pathways between the sensory apparatus and the cortex in mammals. For example, theories for the development of feature detectors were put forward by Nass and Cooper (1975), by Grossberg (1976), and more recently by Obermayer et al. (1990). Hubel and Wiesel (1961) established the existence of several different types of feature sensitive cell in the visual cortex of cats. Various subsequent experiments have shown that a considerable amount of development takes place before birth (i.e. …
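
The rule at the heart of Linsker-type models is compact: each cell is a linear unit whose weights grow under a Hebbian update and saturate at hard bounds, so any feature structure must emerge from correlations in the incoming activity. The Python sketch below is a minimal, hypothetical rendering of such a saturating Hebbian rule for a single unit driven by noise input; the learning rate, weight bounds, connectivity figures, and the saturation-time measure are illustrative assumptions rather than the paper's parameters, meant only to suggest how training time can be estimated under the much lower connectivity of the auditory pathway.

    import numpy as np

    def steps_to_saturation(n_inputs, lr=0.01, w_max=0.5, frac=0.9,
                            max_steps=50_000, seed=0):
        """Hebbian training of one linear Linsker-style unit on noise input:
        w <- clip(w + lr * x * y, -w_max, w_max), with y = w . x.
        Returns how many presentations it takes before `frac` of the weights
        have saturated, a crude stand-in for "training time". Uncorrelated
        noise stands in for spontaneous prenatal activity; real Linsker
        layers feed each unit spatially correlated input from below."""
        rng = np.random.default_rng(seed)
        w = rng.uniform(-0.1, 0.1, n_inputs)
        for t in range(1, max_steps + 1):
            x = rng.standard_normal(n_inputs)   # spontaneous (noise) activity
            y = w @ x                           # linear unit response
            w = np.clip(w + lr * x * y, -w_max, w_max)
            if np.mean(np.abs(w) >= w_max) >= frac:
                return t
        return max_steps

    # Illustrative connectivity only: a "visual-like" unit with many synapses
    # versus an "auditory-like" unit with far fewer. Averaging over a few
    # seeds gives a typical training time and its run-to-run spread.
    for name, n in [("visual-like", 600), ("auditory-like", 60)]:
        times = [steps_to_saturation(n, seed=s) for s in range(5)]
        print(f"{name:14s} mean steps {np.mean(times):7.0f}  spread {np.ptp(times):5.0f}")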


Automatic Learning Rate Maximization by On-Line Estimation of the Hessian's Eigenvectors

Neural Information Processing Systems

We propose a very simple and well-principled way of computing the optimal step size in gradient descent algorithms. The online version is very efficient computationally and is applicable to large backpropagation networks trained on large data sets. The main ingredient is a technique for estimating the principal eigenvalue(s) and eigenvector(s) of the objective function's second-derivative matrix (Hessian), which does not even require calculating the Hessian. Several other applications of this technique are proposed for speeding up learning or for eliminating useless parameters.

1 INTRODUCTION

Choosing the appropriate learning rate, or step size, in a gradient descent procedure such as backpropagation is simultaneously one of the most crucial and expert-intensive parts of neural-network learning. We propose a method for computing the best step size which is well-principled, simple, very cheap computationally and, most of all, applicable to online training with large networks and data sets.
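
The main ingredient admits a short sketch: a Hessian-vector product can be obtained from two gradient evaluations by a finite difference, and power iteration on that product converges to the principal eigenvalue and eigenvector, from which a step size follows. The Python snippet below is a batch sketch of this ingredient on a hypothetical linear least-squares objective; the paper's actual procedure is an online variant with a running average over training examples, and the objective, constants, and the 1/lambda_max step-size choice here are illustrative assumptions.

    import numpy as np

    def grad(w, X, y):
        """Gradient of mean squared error for a linear model (stand-in objective)."""
        return 2.0 * X.T @ (X @ w - y) / len(y)

    def principal_eigenpair(w, X, y, n_iter=50, eps=1e-4, seed=0):
        """Estimate the Hessian's largest eigenvalue and eigenvector by power
        iteration, using the finite-difference identity
            H v ~= (grad(w + eps * v) - grad(w)) / eps,
        so the Hessian itself is never formed."""
        rng = np.random.default_rng(seed)
        v = rng.standard_normal(w.shape)
        v /= np.linalg.norm(v)
        g0 = grad(w, X, y)
        lam = 0.0
        for _ in range(n_iter):
            hv = (grad(w + eps * v, X, y) - g0) / eps  # Hessian-vector product
            lam = np.linalg.norm(hv)                   # eigenvalue estimate
            v = hv / lam                               # re-normalised direction
        return lam, v

    rng = np.random.default_rng(1)
    X, y = rng.standard_normal((200, 10)), rng.standard_normal(200)
    lam, _ = principal_eigenpair(np.zeros(10), X, y)
    eta = 1.0 / lam  # step size; divergence sets in beyond 2 / lambda_max
    print(f"lambda_max ~= {lam:.3f}, suggested step size ~= {eta:.3f}")

For a quadratic objective the finite difference is exact up to rounding, which is why a fixed eps suffices in this sketch.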




Goal-Driven Learning: Fundamental Issues: A Symposium Report

AI Magazine

… requirements needs, it must be able to represent what these needs are. Ram proposed representations that include the desired knowledge (possibly partially specified) and the reason that the knowledge is sought. Leake focused on the representation of the knowledge required to resolve anomalies (which depends on a vocabulary of anomaly characterization structures to describe the information … In his model, learning is done unintentionally; a problem solver attempting to solve a problem simply stores a trace of its processing without attention to its future relevance. However, Ng's previously mentioned studies show that, for a different class of task, learning goals have a strong effect on the learning performance of human learners. A future question is to identify the limits of goal-driven processing in human learners. … for filling system knowledge gaps also direct explanation generation by guiding retrieval and revision of explanations during case-based explanation construction (Leake 1992). In the context of analogical mapping, Thagard pointed out that goals, semantic constraints, and syntactic constraints all affect analogical mapping (Holyoak and Thagard 1989) and the retrieval of potential analogs …


AAAI News

AI Magazine

They also hope to increase the submissions … government contacts indicated the importance of forming a mission … Bonnie Dorr has volunteered to serve as symposium cochair.


The Difficulties of Learning Logic Programs with Cut

Journal of Artificial Intelligence Research

As real logic programmers normally use cut (!), an effective learning procedure for logic programs should be able to deal with it. Because the cut predicate has only a procedural meaning, clauses containing cut cannot be learned using an extensional evaluation method, as is done in most learning systems. On the other hand, searching a space of possible programs (instead of a space of independent clauses) is infeasible. An alternative solution is first to generate a candidate base program which covers the positive examples, and then to make it consistent by inserting cut where appropriate. The problem of learning programs with cut has not been investigated before, and this seems a natural and reasonable approach. We generalize this scheme and investigate the difficulties that arise. Some of the major shortcomings are actually caused, in general, by the need for intensional evaluation. In conclusion, the analysis in this paper suggests, on precise and technical grounds, that learning cut is difficult, and that current induction techniques should probably be restricted to purely declarative logic languages.
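
The scheme sketched in the abstract, first cover the positives with an ordered base program and then insert cut to exclude the negatives, can be made concrete with a toy interpreter. The Python below is a hypothetical rendering, not the paper's system: clauses are (guard, conclusion) pairs tried in order, a cut commits to the first matching clause, and a greedy loop tries one cut insertion at a time. It also illustrates why extensional, clause-by-clause evaluation fails here: the over-general catch-all clause is acceptable on its own and only becomes harmless once a cut in an earlier clause prunes it.

    def run(program, x):
        """Intensional evaluation: clauses are tried in order, each clause whose
        guard holds contributes its conclusion (modelling answers found on
        backtracking), and a cut commits, pruning all later clauses."""
        out = set()
        for guard, conclusion, has_cut in program:
            if guard(x):
                out.add(conclusion)
                if has_cut:
                    break
        return out

    # Toy task: classify integers as "even" or "odd".
    pos = [(0, "even"), (2, "even"), (3, "odd"), (5, "odd")]
    neg = [(2, "odd"), (4, "odd")]  # even numbers must not be classified "odd"

    # Phase 1: an ordered base program covering every positive example. Each
    # clause is extensionally fine, but run intensionally the catch-all second
    # clause also derives "odd" for even inputs.
    base = [
        (lambda x: x % 2 == 0, "even", False),
        (lambda x: True,       "odd",  False),  # over-general catch-all
    ]
    assert all(label in run(base, x) for x, label in pos)
    assert any(label in run(base, x) for x, label in neg)  # inconsistent

    # Phase 2: greedily try inserting a cut into each clause until the program
    # still covers all positives and no longer covers any negative.
    def insert_cuts(program, pos, neg):
        for i, (guard, concl, _) in enumerate(program):
            candidate = program[:i] + [(guard, concl, True)] + program[i + 1:]
            if (all(label in run(candidate, x) for x, label in pos)
                    and not any(label in run(candidate, x) for x, label in neg)):
                return candidate
        return None

    fixed = insert_cuts(base, pos, neg)
    assert fixed is not None and fixed[0][2]  # a cut in the first clause suffices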


Research Workshop on Expert Judgment, Human Error, and Intelligent Systems

AI Magazine

This workshop brought together 20 computer scientists, psychologists, and human-computer interaction (HCI) researchers to exchange results and views on human error and judgment bias. Human error is typically studied when operators undertake actions, but judgment bias is an issue in thinking rather than acting. Both topics are generally ignored by the HCI community, which is interested in designs that eliminate human error and bias tendencies. As a result, almost no one at the workshop had met before, and the discussion for most participants was novel and lively. Many areas of previously unexamined overlap were identified. An agenda of research needs was also developed.



The Gardens of Learning: A Vision for AI

AI Magazine

The field of AI is directed at the fundamental problem of how the mind works; its approach, among other things, is to try to simulate its working -- in bits and pieces. History shows us that mankind has been trying to do this certainly for hundreds of years, but the blooming of current computer technology has sparked an explosion in the research we can now do. The center of AI is the wonderful capacity we call learning, to which the field is paying increasing attention. Learning is difficult and easy, complicated and simple, and most research doesn't look at many aspects of its complexity. However, we in the AI field are starting to. Let us now celebrate the efforts of our forebears and rejoice in our own efforts, so that our successors can thrive in their research. This article is the substance, edited and adapted, of the keynote address given at the 1992 annual meeting of the American Association for Artificial Intelligence on 14 July in San Jose, California.