
Problem solving techniques for the design of algorithms


"By studying the problem-solving techniques that people use to design algorithms we can learn something about building systems that automatically derive algorithms or assist human designers. In this paper we present a model of algorithm design based on our analysis of the protocols of two subjects designing three convex hull algorithms. The subjects work mainly in a data-flow problem space in which the objects are representations of partially specified algorithms. A small number of general-purpose operators construct and modify the representations; these operators are adapted to the current problem state by means-ends analysis. The problem space also includes knowledge-rich schemas such as divide and conquer that subjects incorporate into their algorithms. A particularly versatile problem-solving method in this problem space is symbolic execution, which can be used to refine, verify, or explain components of an algorithm. The subjects also work in a task-domain space about geometry. The interplay between problem solving in the two spaces makes possible the process of discovery. We have observed that the time a subject takes to design an algorithm is proportional to the number of components in the algorithm's data-flow representation. Finally, the details of the problem spaces provide a model for building a robust automated system." Information Processing and Management 20(1-2):97-118



Research at The University of Texas

AI Magazine

Research in artificial intelligence at the University of Texas at Austin is diverse. It is spread across many departments (Computer Science, Mathematics, the Institute for Computer Science and Computer Applications, and the Linguistics Research Center), and it covers most of the major subareas of AI (natural language, theorem proving, knowledge representation, languages for AI, and applications). Related work is also being done in several other departments, including EE (low-level vision), Psychology, Linguistics, and the Center for Cognitive Science.


Introduction to the COMTEX Microfiche Edition of Memos from the Stanford University Artificial Intelligence Laboratory

AI Magazine

The Stanford Artificial Intelligence Project, later known as the Stanford AI Lab or SAIL, was created by Prof. John McCarthy shortly after his arrival at Stanford in 1962. As a faculty member in the Computer Science Division of the Mathematics Department, McCarthy began supervising research in artificial intelligence and timesharing systems with a few students. From this small start, McCarthy built a large and active research organization involving many other faculty and research projects as well as his own. Nevertheless, there are some important dimensions to the research that took place in the AI Lab that I will try to put into historical context in this brief introduction.


Artificial Intelligence Needs More Emphasis on Basic Research: President's Quarterly Message

AI Magazine

Too few people are doing basic research in AI relative to the number working on applications. The ratio of basic to applied work is lower in AI than in the older sciences, and lower than in computer science generally. This is unfortunate, because reaching human-level artificial intelligence will require fundamental conceptual advances.


Toward a Unified Approach for Conceptual Knowledge Acquisition

AI Magazine

In keeping with a desire to abstract general principles in AI, this article begins to examine some relationships among heuristic learning in search, classification of utility, properties of certain structures, measurement of acquired knowledge, and efficiency of associated learning. In the process, a simple definition is given for conceptual knowledge, considered as information compression. The discussion concludes that domain-specific conceptual knowledge can be acquired. Among other implications of the analysis is that statistical observation of probabilities can result in the equivalent of planning, in low susceptibility to error, and in efficient learning.


Artificial Intelligence Prepares for 2001

AI Magazine

Artificial Intelligence, as a maturing scientific/engineering discipline, is beginning to find its niche among the variety of subjects that are relevant to intelligent, perceptive behavior. A view of AI is presented that is based on a declarative representation of knowledge with semantic attachments to problem-specific procedures and data structures. Several important challenges to this view are briefly discussed. It is argued that research in the field would be stimulated by a project to develop a computer individual that would have a continuing existence in time.


What Should Artificial Intelligence Want from the Supercomputers?

AI Magazine

While some proposals for supercomputers increase the powers of existing machines like CDC and Cray supercomputers, others suggest radical changes of architecture to speed up non-traditional operations such as logical inference in PROLOG, recognition/action in production systems, or message passing. We examine the case of parallel PROLOG to identify several related computations which subsume those of parallel PROLOG, but which have much wider interest, and which may have roughly the same difficulty of mechanization. Similar considerations apply to some other proposed architectures as well, raising the possibility that current efforts may be limiting their aims unnecessarily.


Research at Jet Propulsion Laboratory

AI Magazine

AI research at JPL started in 1972, when design and construction of an experimental "Mars Rover" began. Early in that effort, it was recognized that rover planning capabilities were inadequate. Research in planning was begun in 1975, and work on a succession of AI expert systems of steadily increasing power has continued to the present. Within the group, we have concentrated our efforts on expert systems, although work on vision and robotics has continued in a separate organization, with which we have maintained informal contacts.