The Gardens of Learning: A Vision for AI
The field of AI is directed at the fundamental problem of how the mind works; its approach, among other things, is to try to simulate its workings -- in bits and pieces. History shows us that mankind has been trying to do this for hundreds of years, but the blooming of current computer technology has sparked an explosion in the research we can now do. The center of AI is the wonderful capacity we call learning, to which the field is paying increasing attention. Learning is difficult and easy, complicated and simple, and most research doesn't look at many aspects of its complexity. However, we in the AI field are starting to. Let us now celebrate the efforts of our forebears and rejoice in our own efforts, so that our successors can thrive in their research. This article is the substance, edited and adapted, of the keynote address given at the 1992 annual meeting of the Association for the Advancement of Artificial Intelligence on 14 July in San Jose, California. AI Magazine 14(2): 36-48.
1992 AAAI Robot Exhibition and Competition
Dean, Thomas, Bonasso, R. Peter
The first Robotics Exhibition and Competition sponsored by the Association for the Advancement of Artificial Intelligence was held in San Jose, California, on 14-16 July 1992 in conjunction with the Tenth National Conference on AI. This article describes the history behind the competition, the preparations leading to the competition, the three days during which 12 teams competed in the three events making up the competition, and the prospects for other such competitions in the future.
Learning Problem-Solving Heuristics by Experimentation
Mitchell, T.M., Utgoff, P.E., Banerji, R.B.
Machine Learning: An Artificial Intelligence Approach contains tutorial overviews and research papers representative of trends in the area of machine learning as viewed from an artificial intelligence perspective. The book is organized into six parts. Part I provides an overview of machine learning and explains why machines should learn. Part II covers important issues affecting the design of learning programs -- particularly programs that learn from examples. It also describes inductive learning systems.
Neural Network Perception for Mobile Robot Guidance
Vision-based mobile robot guidance has proven difficult for classical machine vision methods because of the diversity and real-time constraints inherent in the task. This thesis describes a connectionist system called ALVINN (Autonomous Land Vehicle In a Neural Network) that overcomes these difficulties. ALVINN learns to guide mobile robots using the back-propagation training algorithm. Because of its ability to learn from example, ALVINN can adapt to new situations and therefore cope with the diversity of the autonomous navigation task. But real-world problems like vision-based mobile robot guidance present a different set of challenges for the connectionist paradigm.
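The core idea the abstract describes -- a feed-forward network trained by back-propagation to map a road image to a steering response -- can be sketched as follows. The dimensions, initialization, and two-layer architecture here are illustrative assumptions, not ALVINN's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: a low-resolution "image" feeds a small network whose
# output units represent discrete steering directions. Sizes are made up.
n_in, n_hidden, n_out = 30, 5, 9

W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Hidden activations and steering-unit outputs for one input."""
    h = sigmoid(x @ W1)
    return h, sigmoid(h @ W2)

def backprop_step(x, target, lr=0.5):
    """One back-propagation update on squared error (simplified:
    no momentum, no bias units). Returns the current loss."""
    global W1, W2
    h, y = forward(x)
    err = y - target
    delta2 = err * y * (1 - y)               # output-layer delta
    delta1 = (delta2 @ W2.T) * h * (1 - h)   # hidden-layer delta
    W2 -= lr * np.outer(h, delta2)
    W1 -= lr * np.outer(x, delta1)
    return float(np.sum(err ** 2))
```

Training on example images and their correct steering responses is then just repeated calls to `backprop_step`, which is what lets a system of this kind adapt to situations it was not explicitly programmed for.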
Green's Function Method for Fast On-Line Learning Algorithm of Recurrent Neural Networks
Sun, Guo-Zheng, Chen, Hsing-Hen, Lee, Yee-Chun
The two well-known learning algorithms for recurrent neural networks are back-propagation (Rumelhart et al.; Werbos) and forward propagation (Williams and Zipser). The main drawback of back-propagation is its off-line backward pass in time for error accumulation, which violates the on-line requirement of many practical applications. Although the forward propagation algorithm can be used in an on-line manner, its annoying drawback is the heavy computational load required to update the high-dimensional sensitivity matrix (O(N^4) operations for each time step). Therefore, developing a fast forward algorithm is a challenging task.
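The computational load mentioned above can be made concrete with a toy sketch of the forward-propagation (Williams-Zipser-style) sensitivity update for a fully recurrent net of n units; the names, shapes, and tanh nonlinearity are illustrative assumptions, not the paper's notation:

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.5, (n, n))   # recurrent weights
s = np.tanh(rng.normal(size=n))    # current state
# p[i, j, k] = d s_i / d w_jk: sensitivity of each unit to each weight.
p = np.zeros((n, n, n))

def rtrl_step(s, p, W, x=0.0):
    """One time step: advance the state and the n*n^2 sensitivities.
    The recurrent contraction touches all n^3 entries of p with an
    n-fold sum each -- hence O(n^4) work per time step."""
    net = W @ s + x
    s_new = np.tanh(net)
    d = 1.0 - s_new ** 2                       # tanh'(net_i)
    # Recurrent part: sum_l W[i, l] * p[l, j, k]  (the O(n^4) term)
    rec = np.einsum('il,ljk->ijk', W, p)
    # Direct part: d net_i / d w_jk = delta_ij * s_k
    direct = np.zeros_like(p)
    for j in range(n):
        direct[j, j, :] = s
    return s_new, d[:, None, None] * (rec + direct)

s, p = rtrl_step(s, p, W)
```

Because p must be carried forward at every step, the cost is paid continually during on-line operation, which is exactly the burden a fast forward algorithm would aim to reduce.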
Improving the Performance of Radial Basis Function Networks by Learning Center Locations
Wettschereck, Dietrich, Dietterich, Thomas
Three methods for improving the performance of (Gaussian) radial basis function (RBF) networks were tested on the NETtalk task. In RBF, a new example is classified by computing its Euclidean distance to a set of centers chosen by unsupervised methods. The application of supervised learning to learn a non-Euclidean distance metric was found to reduce the error rate of RBF networks, while supervised learning of each center's variance resulted in inferior performance. The best improvement in accuracy was achieved by networks called generalized radial basis function (GRBF) networks. In GRBF, the center locations are determined by supervised learning. After training on 1,000 words, RBF classifies 56.5% of letters correctly, while GRBF scores 73.4% of letters correct (on a separate test set). From these and other experiments, we conclude that supervised learning of center locations can be very important for radial basis function learning.
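The contrast the abstract draws -- plain RBF with fixed, unsupervised centers versus GRBF, where center locations are also moved by supervised gradient steps -- can be sketched as below, assuming a shared Gaussian width and a squared-error loss (an illustrative setup, not the paper's exact one):

```python
import numpy as np

def rbf_activations(x, centers, sigma=1.0):
    """Gaussian RBF activations, one per center, from Euclidean distance."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def rbf_predict(x, centers, weights, sigma=1.0):
    """Output is a weighted sum of the radial basis activations."""
    return rbf_activations(x, centers, sigma) @ weights

def grbf_step(x, y, centers, weights, sigma=1.0, lr=0.1):
    """One supervised gradient step on squared error. Plain RBF would
    update only `weights`; GRBF also moves the center locations."""
    a = rbf_activations(x, centers, sigma)
    err = a @ weights - y
    grad_w = err * a                                            # d(err^2/2)/dw
    grad_c = err * (weights * a)[:, None] * (x - centers) / sigma ** 2
    return centers - lr * grad_c, weights - lr * grad_w
```

In this framing, the centers become ordinary trainable parameters, which is why supervised placement can outperform centers fixed in advance by unsupervised clustering.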