Education
Knowledge-Based Environments for Teaching and Learning
Woolf, Beverly Park, Soloway, Elliot, Clancey, William J., VanLehn, Kurt, Suthers, Dan
The Spring Symposium on Knowledge-Based Environments for Teaching and Learning focused on the use of technology to facilitate learning, training, teaching, counseling, coaxing, and coaching. Sixty participants from academia and industry assessed the progress made to date and speculated on new tools for building second-generation systems.
Second International Workshop on User Modeling
The Second International Workshop on User Modeling was held March 30 to April 1, 1990, in Honolulu, Hawaii. The general chairperson was Dr. Wolfgang Wahlster of the University of Saarbrücken; the program and local arrangements chairperson was Dr. David Chin of the University of Hawaii at Manoa. The workshop was sponsored by AAAI and the University of Hawaii, with AAAI providing eight travel stipends for students.
Critiquing Human Judgment Using Knowledge-Acquisition Systems
Automated knowledge-acquisition systems have focused on embedding a cognitive model of a key knowledge worker in their software, which allows the system to acquire a knowledge base by interviewing domain experts just as the knowledge worker would. Two sets of research questions arise: (1) What theories, strategies, and approaches will allow the modeling process to be facilitated, accelerated, and possibly automated? If automated knowledge-acquisition systems reduce the bottleneck associated with acquiring knowledge bases, how can the bottleneck of building the automated knowledge-acquisition system itself be broken? (2) If the automated knowledge-acquisition system centers on an effective cognitive model of the key knowledge worker(s), to what extent does this model account for, and attempt to influence, human bias in knowledge-base rule generation? That is, humans are known to be subject to errors and cognitive biases in their judgment processes. How can an automated system critique and influence such biases in a positive fashion, what common patterns exist across applications, and can models of influencing behavior be described and standardized? This article addresses these research questions by presenting several prototypical scenes depicting bias and debiasing strategies.
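The article presents its bias and debiasing strategies as prose scenarios. As a loose illustration of the idea, the Python sketch below scans rules elicited from an expert and flags judgment patterns commonly associated with cognitive bias; the rule format, thresholds, and the two checks shown (overconfidence and anchoring) are invented for this sketch and are not the article's system.

    from dataclasses import dataclass

    @dataclass
    class ElicitedRule:
        condition: str
        conclusion: str
        confidence: float    # expert's stated certainty, 0.0-1.0
        evidence_count: int  # supporting cases the expert cited

    ANCHORS = (0.5, 0.75, 0.9)  # round values experts tend to anchor on

    def critique(rules):
        """Yield (rule, warning) pairs for judgment patterns often linked to bias."""
        for r in rules:
            # Overconfidence: near-certainty backed by very little cited evidence.
            if r.confidence > 0.95 and r.evidence_count < 5:
                yield r, "near-certainty with little cited evidence"
            # Anchoring: stated confidence clustered on a common round value.
            if any(abs(r.confidence - a) < 0.005 for a in ANCHORS):
                yield r, "confidence sits on a common anchor value"

    rules = [ElicitedRule("fever and rash", "measles", 0.98, 2),
             ElicitedRule("cough and fever", "influenza", 0.75, 40)]
    for rule, why in critique(rules):
        print(f"Review '{rule.condition} -> {rule.conclusion}': {why}")

A real critic would draw its checks from a catalog of documented biases and phrase its warnings as questions back to the expert rather than as verdicts.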
Review of Representation and Reality
…Steve Benton on an advanced beam-mixing television display), (4) movies of the future (putting feature-length movies on laser disks, thereby ushering in paperback movies), (5) the visible… …information. Like Richard Feynman's two books of memoirs and Gleick's Chaos, this book will be passed among workers in computer and engineering departments as a… Part of the Media Laboratory's heritage (its origins are in the School of Architecture) is a startling receptivity to the arts, especially music and the visual arts, and Brand repeatedly returns to this subject.
Review of The Media Lab
Stewart Brand, of Whole Earth Catalog fame, is a technology enthusiast. In 1986, he spent three months in the fantasyland of his choice, MIT's Media Laboratory (formerly the Architecture Machine Group). In his latest book, The Media Lab: Inventing the Future at MIT (Viking/Penguin, New York, 1988, 285 pp., $10, ISBN 0-14-009701-5), he tells the world what he found.
Review of Artificial Intelligence: A Knowledge-Based Approach
To be considered exceptional, a textbook must satisfy three basic requirements. First, it must be authoritative, written by someone with a broad range of experience in, and knowledge of, the subject. Second, it must communicate effectively with the reader, much as a course instructor must be able to impart knowledge to students in a classroom. Third, it must stimulate readers to think more deeply about the subject and to view it from fresh perspectives. In Artificial Intelligence: A Knowledge-Based Approach (Boyd and Fraser, Boston, 740 pp., $48.95), author Morris W. Firebaugh has succeeded in meeting each of these requirements.
Efficient Parallel Learning Algorithms for Neural Networks
Kramer, Alan H., Sangiovanni-Vincentelli, Alberto
Parallelizable optimization techniques are applied to the problem of learning in feedforward neural networks. In addition to having superior convergence properties, optimization techniques such as the Polak-Ribière method are also significantly more efficient than the backpropagation algorithm. These results are based on experiments performed on small Boolean learning problems and the noisy real-valued learning problem of handwritten character recognition.

1 INTRODUCTION

The problem of learning in feedforward neural networks has received a great deal of attention recently because of the ability of these networks to represent seemingly complex mappings in an efficient parallel architecture. This learning problem can be characterized as an optimization problem, but it is unique in several respects. Function evaluation is very expensive; however, because the underlying network is parallel in nature, this evaluation is easily parallelizable. In this paper, we describe the network learning problem in a numerical framework and investigate parallel algorithms for its solution. Specifically, we compare the performance of several parallelizable optimization techniques to the standard backpropagation algorithm. Experimental results show the clear superiority of the numerical techniques.

2 NEURAL NETWORKS

A neural network is characterized by its architecture, its node functions, and its interconnection weights. In a learning problem, the first two of these are fixed, so the weight values are the only free parameters in the system.
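As a rough illustration of the contrast the authors draw, the sketch below trains a tiny feedforward network on a small Boolean problem (XOR) using a Polak-Ribière conjugate-gradient update with a crude backtracking line search; the gradient itself is the standard backpropagation computation. The 2-3-1 architecture, the line search, and the iteration count are assumptions made for this sketch, not the authors' experimental setup.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    Y = np.array([[0], [1], [1], [0]], float)

    def unpack(w):                        # flat vector -> (W1, b1, W2, b2)
        W1 = w[:6].reshape(2, 3); b1 = w[6:9]
        W2 = w[9:12].reshape(3, 1); b2 = w[12:]
        return W1, b1, W2, b2

    def loss_grad(w):
        """Sum-squared error and its gradient, computed by backpropagation."""
        W1, b1, W2, b2 = unpack(w)
        h = np.tanh(X @ W1 + b1)                  # hidden layer
        o = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
        e = o - Y
        do = e * o * (1 - o)                      # delta at output
        dh = (do @ W2.T) * (1 - h ** 2)           # delta at hidden layer
        grad = np.concatenate([(X.T @ dh).ravel(), dh.sum(0),
                               (h.T @ do).ravel(), do.sum(0)])
        return 0.5 * np.sum(e ** 2), grad

    def backtrack(w, d, f, g):
        """Halve the step until the Armijo sufficient-decrease test passes."""
        a = 1.0
        while loss_grad(w + a * d)[0] > f + 1e-4 * a * (g @ d) and a > 1e-10:
            a *= 0.5
        return a

    w = rng.normal(scale=0.5, size=13)
    f, g = loss_grad(w)
    d = -g                                # first direction: steepest descent
    for _ in range(200):
        a = backtrack(w, d, f, g)
        w = w + a * d
        f_new, g_new = loss_grad(w)
        # Polak-Ribiere: beta = g_new.(g_new - g_old) / (g_old.g_old);
        # clipping beta at zero (the PR+ variant) acts as an automatic restart.
        beta = max(0.0, g_new @ (g_new - g) / (g @ g + 1e-12))
        d = -g_new + beta * d
        f, g = f_new, g_new
    print("final sum-squared error:", f)

The same loss_grad routine driven by a fixed learning rate is plain batch backpropagation; swapping in the conjugate-gradient update is what buys the faster convergence the authors report, and since each loss_grad call is a batch of independent forward/backward passes over the training patterns, the expensive function evaluation parallelizes naturally.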