Education
Critiquing Human Judgment Using Knowledge-Acquisition Systems
Automated knowledge-acquisition systems have focused on embedding a cognitive model of a key knowledge worker in software so that the system can acquire a knowledge base by interviewing domain experts just as the knowledge worker would. Two sets of research questions arise: (1) What theories, strategies, and approaches will allow the modeling process to be facilitated, accelerated, and, possibly, automated? If automated knowledge-acquisition systems reduce the bottleneck associated with acquiring knowledge bases, how can the bottleneck of building the automated knowledge-acquisition system itself be broken? (2) If the automated knowledge-acquisition system centers on an effective cognitive model of the key knowledge worker(s), to what extent does this model account for, and attempt to influence, human bias in knowledge-base rule generation? That is, humans are known to be subject to errors and cognitive biases in their judgment processes. How can an automated system critique and influence such biases in a positive fashion, what common patterns exist across applications, and can models of influencing behavior be described and standardized? This article addresses these questions by presenting several prototypical scenes depicting bias and debiasing strategies.
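One way to picture such a critique is sketched below; this is a hypothetical illustration, not the article's system. An acquisition tool could compare an expert's stated rule confidence against recorded case data and flag apparent base-rate neglect. The rule format, threshold, and numbers are invented for illustration.

```python
# A hypothetical sketch of one debiasing critique a knowledge-acquisition tool might apply:
# flag base-rate neglect when an expert's stated rule confidence is far above what the
# recorded prevalence of the conclusion supports. Rule structure, threshold, and data are
# illustrative assumptions, not taken from the article.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ElicitedRule:
    condition: str
    conclusion: str
    expert_confidence: float   # expert's stated certainty, 0..1

def critique_base_rate(rule: ElicitedRule, observed_base_rate: float,
                       max_lift: float = 4.0) -> Optional[str]:
    """Return a critique if the stated confidence implies an implausible lift over the base rate."""
    if observed_base_rate > 0 and rule.expert_confidence / observed_base_rate > max_lift:
        return (f"Rule '{rule.condition} -> {rule.conclusion}': confidence "
                f"{rule.expert_confidence:.2f} is more than {max_lift:.0f}x the observed "
                f"base rate {observed_base_rate:.2f}; consider base-rate neglect.")
    return None

# Hypothetical elicited rule and case statistics.
rule = ElicitedRule("machine vibrates and runs hot", "bearing failure", expert_confidence=0.90)
print(critique_base_rate(rule, observed_base_rate=0.10))
```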
Review of Representation and Reality
Review of The Media Lab
Stewart Brand, of Whole Earth Catalog fame, is a technology enthusiast. In 1986, he spent three months in the fantasyland of his choice, MIT's Media Laboratory (formerly the Architecture Machine Group). In his latest book, The Media Lab: Inventing the Future at MIT (Viking/Penguin, New York, 1988, 285 pp., $10, ISBN 0-14-009701-5), he tells the world what he found. Like Richard Feynman's two books of memoirs and Gleick's Chaos, this book will be passed among workers in computer and engineering departments. Part of the Media Laboratory's heritage (its origins are in the School of Architecture) is a receptivity to the arts, especially music and the visual arts, and Brand repeatedly returns to this subject.
Review of Artificial Intelligence: A Knowledge-Based Approach
To be considered exceptional, a textbook must satisfy three basic requirements. First, it must be authoritative, written by one with a broad range of experience in, and knowledge of, a subject. Second, it must effectively communicate to the reader, in the same manner in which a course instructor must be capable of imparting knowledge to students in a classroom. Third, it must stimulate the reader into thinking more deeply about the subject and into viewing it from fresh perspectives. In Artificial Intelligence: A Knowledge-Based Approach (Boyd and Fraser, Boston, 740 pp., $48.95), author Morris W. Firebaugh has succeeded in meeting each of these requirements.
Efficient Parallel Learning Algorithms for Neural Networks
Kramer, Alan H., Sangiovanni-Vincentelli, Alberto
Parallelizable optimization techniques are applied to the problem of learning in feedforward neural networks. In addition to having superior convergence properties, optimization techniques such as the Polak-Ribiere method are also significantly more efficient than the backpropagation algorithm. These results are based on experiments performed on small Boolean learning problems and the noisy real-valued learning problem of handwritten character recognition.
1 INTRODUCTION
The problem of learning in feedforward neural networks has received a great deal of attention recently because of the ability of these networks to represent seemingly complex mappings in an efficient parallel architecture. This learning problem can be characterized as an optimization problem, but it is unique in several respects. Function evaluation is very expensive. However, because the underlying network is parallel in nature, this evaluation is easily parallelizable. In this paper, we describe the network learning problem in a numerical framework and investigate parallel algorithms for its solution. Specifically, we compare the performance of several parallelizable optimization techniques to the standard backpropagation algorithm. Experimental results show the clear superiority of the numerical techniques.
2 NEURAL NETWORKS
A neural network is characterized by its architecture, its node functions, and its interconnection weights. In a learning problem, the first two of these are fixed, so that the weight values are the only free parameters in the system.
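The comparison described above is easy to prototype. Below is a minimal sketch, not the authors' code, that trains a small feedforward network on a Boolean problem (XOR) using Polak-Ribiere conjugate-gradient steps, with each gradient obtained by ordinary backpropagation; the 2-3-1 architecture, backtracking line search, and iteration count are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's implementation): Polak-Ribiere
# conjugate-gradient training of a tiny feedforward network on XOR. Gradients come
# from standard backpropagation; only the weight-update rule differs from plain
# gradient descent. Architecture and constants are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

shapes = [(3, 2), (1, 3), (1, 3), (1, 1)]  # W1, b1, W2, b2 of a 2-3-1 network
sizes = [int(np.prod(s)) for s in shapes]

def unpack(w):
    parts, i = [], 0
    for s, n in zip(shapes, sizes):
        parts.append(w[i:i + n].reshape(s))
        i += n
    return parts

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w):
    """Mean-squared error and its gradient, computed by ordinary backpropagation."""
    W1, b1, W2, b2 = unpack(w)
    h = sigmoid(X @ W1.T + b1)        # hidden activations, shape (4, 3)
    out = sigmoid(h @ W2.T + b2)      # outputs, shape (4, 1)
    err = out - y
    loss = 0.5 * float(np.mean(err ** 2))
    d_out = err * out * (1 - out) / len(X)
    d_h = (d_out @ W2) * h * (1 - h)
    grads = [d_h.T @ X, d_h.sum(0, keepdims=True),
             d_out.T @ h, d_out.sum(0, keepdims=True)]
    return loss, np.concatenate([g.ravel() for g in grads])

def predict(w):
    W1, b1, W2, b2 = unpack(w)
    return sigmoid(sigmoid(X @ W1.T + b1) @ W2.T + b2).ravel()

w = rng.normal(scale=0.5, size=sum(sizes))
loss, g = loss_and_grad(w)
d = -g                                # first search direction: steepest descent
for _ in range(300):
    step = 1.0                        # crude backtracking line search along d
    new_loss, new_g = loss_and_grad(w + step * d)
    while new_loss >= loss and step > 1e-8:
        step *= 0.5
        new_loss, new_g = loss_and_grad(w + step * d)
    w = w + step * d
    # Polak-Ribiere coefficient; restart with steepest descent if it turns negative.
    beta = max(0.0, float(new_g @ (new_g - g)) / float(g @ g))
    d = -new_g + beta * d
    loss, g = new_loss, new_g

print("final loss:", loss)
print("XOR outputs:", np.round(predict(w), 2))
```

Replacing the conjugate-gradient update with a fixed-step move along the negative gradient recovers the plain gradient-descent baseline that the paper compares against.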
The Power of Physical Representations
Akman, Varol, Hagen, Paul J. W. ten
Commonsense reasoning about the physical world, as exemplified by "Iron sinks in water" or "If a ball is dropped, it gains speed," will be indispensable in future programs. We argue that to make such predictions (a process known as envisioning), programs should use abstract entities (such as the gravitational field), principles (such as the principle of superposition), and laws (such as the conservation of energy) of physics for representation and reasoning. These arguments are in accord with a recent study in physics instruction in which expert problem solving is related to the construction of physical representations that contain fictitious, imagined entities such as forces and momenta (Larkin 1983). We give several examples showing the power of physical representations.
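As an illustration of what such physical representations buy, the two commonsense predictions quoted above follow directly from superposing gravity and buoyancy and from conservation of energy. The sketch below is ours, not the authors' program; the densities and field strength are standard textbook values rather than figures from the article.

```python
# A minimal sketch of "envisioning" with physical representations: abstract entities
# (forces, the gravitational field) and laws (superposition of forces, conservation of
# energy) drive the predictions. Values are standard textbook constants (illustrative).
G = 9.81  # gravitational field strength, m/s^2

def sinks_in_water(density_object_kg_m3, density_water_kg_m3=1000.0, volume_m3=1.0):
    """Superpose gravity and buoyancy on a submerged object; net downward force means it sinks."""
    weight = density_object_kg_m3 * volume_m3 * G      # downward force
    buoyancy = density_water_kg_m3 * volume_m3 * G     # upward force (Archimedes)
    net_downward = weight - buoyancy                   # principle of superposition
    return net_downward > 0

def speed_after_drop(height_m):
    """Conservation of energy: m*g*h = 0.5*m*v^2, so v = sqrt(2*g*h), independent of mass."""
    return (2 * G * height_m) ** 0.5

print("Iron (7870 kg/m^3) sinks in water:", sinks_in_water(7870))   # True
print("Wood (600 kg/m^3) sinks in water:", sinks_in_water(600))     # False
print("Speed of a ball dropped from 2 m:", round(speed_after_drop(2.0), 2), "m/s")
```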
An Investigation of AI and Expert Systems Literature: 1980-1984
This article records the results of an experiment in which a survey of AI and expert systems (ES) literature was attempted using Science Citation Indexes. The survey identified a sample of authors and institutions that have had a significant impact on the historical development of AI and ES. However, it also identified several glaring problems with using Science Citation Indexes as a method of comprehensively studying a body of scientific research. Accordingly, the reader is cautioned against using the results presented here to conclude that author A is a better or worse AI researcher than author B.