Toward a Unified Approach for Conceptual Knowledge Acquisition
In keeping with a desire to abstract general principles in AI, this article begins to examine some relationships among heuristic learning in search, classification of utility, properties of certain structures, measurement of acquired knowledge, and efficiency of associated learning. In the process, a simple definition is given for conceptual knowledge, considered as information compression. The discussion concludes that domain-specific conceptual knowledge can be acquired. Among other implications of the analysis is that statistical observation of probabilities can result in the equivalent of planning, in low susceptibility to error, and in efficient learning.
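As a rough illustration of the compression view described above (a hedged sketch, not the article's formalism; the features, threshold, and function names below are invented for the example), conceptual knowledge can be seen as replacing a growing table of observed search states and their utility classes with one feature-based rule that reproduces the same classifications in constant space:

;;; Hypothetical Common Lisp sketch: a compressed "concept" standing in
;;; for a table of raw observations gathered during heuristic search.

(defvar *observations*
  '((8 12 good) (7 11 good) (3 2 bad) (2 3 bad))
  "Raw experience: (pieces mobility class) triples observed during search.")

(defun learned-class (pieces mobility)
  "One compressed rule standing in for the whole table of observations."
  (if (> (+ pieces mobility) 10) 'good 'bad))

(defun rule-matches-experience-p ()
  "Check that the rule reproduces every stored classification."
  (every (lambda (obs)
           (eq (learned-class (first obs) (second obs)) (third obs)))
         *observations*))

;; (rule-matches-experience-p) => T, and the rule's storage cost stays
;; constant no matter how many positions have been observed.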
The Nature of AI: A Reply to Schank
Schank hoped that his article would start a debate on the issues he raised. What are Schank's four views? In answer to his question "What is AI all about?", he claims to see only two possible answers. The Scientific Answer: "that AI is concerned with ..." The Innovative Answer: "It also usually means getting ..." A fifth answer is also advanced, but is immediately withdrawn; in fact, there are enough opinions here for four Roger Schanks, each disagreeing with the other three. As Schank points out, this is unsatisfactory because it leads to a shifting definition of AI. Another of these answers, the learning answer, can also ... Anyone who attempts to clarify a vague term, like AI, is allowed a certain amount of license in highlighting other uses, but there are limits to this license.
Research at The University of Texas
Research in artificial intelligence at the University of Texas at Austin is diverse. It is spread across many departments (Computer Science, Mathematics, the Institute for Computer Science and Computer Applications, and the Linguistics Research Center) and it covers most of the major subareas of AI (natural language, theorem proving, knowledge representation, languages for AI, and applications). Related work is also being done in several other departments, including EE (low-level vision), Psychology, Linguistics, and the Center for Cognitive Science.
Artificial Intelligence Prepares for 2001
Artificial Intelligence, as a maturing scientific/engineering discipline, is beginning to find its niche among the variety of subjects that are relevant to intelligent, perceptive behavior. A view of AI is presented that is based on a declarative representation of knowledge with semantic attachments to problem-specific procedures and data structures. Several important challenges to this view are briefly discussed. It is argued that research in the field would be stimulated by a project to develop a computer individual that would have a continuing existence in time.
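As a hedged sketch of what a declarative representation with semantic attachments can look like in practice (an illustrative toy, not the system the article advocates; the fact base, predicates, and function names are invented), stored assertions answer some queries by lookup, while a predicate with an attached procedure is decided by running problem-specific code:

;;; Hypothetical Common Lisp sketch of semantic attachment.

(defvar *facts*
  '((parent alice bob)
    (parent bob carol))
  "Explicitly stored declarative assertions.")

(defvar *attachments* (make-hash-table)
  "Maps a predicate symbol to a procedure that decides ground instances.")

;; Attach an arithmetic procedure to LESS-THAN, a relation that would be
;; hopeless to enumerate as stored assertions.
(setf (gethash 'less-than *attachments*)
      (lambda (x y) (and (numberp x) (numberp y) (< x y))))

(defun holds-p (goal)
  "True if GOAL is a stored assertion or its predicate's attachment accepts it."
  (or (member goal *facts* :test #'equal)
      (let ((proc (gethash (first goal) *attachments*)))
        (and proc (apply proc (rest goal))))))

;; (holds-p '(parent alice bob)) is settled by the declarative store;
;; (holds-p '(less-than 2 7)) is settled by the attached procedure.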
Artificial Intelligence Needs More Emphasis on Basic Research: President's Quarterly Message
Too few people are doing basic research in AI relative to the number working on applications. This is unfortunate: even the applied goals proposed by various groups in the U.S., Europe, and Japan for the next ten years are not just engineering extrapolations from the present state of the science. Much work on natural language processing seems misguided to me. There is too much emphasis on syntax and not enough on the semantics; translation between existing AI formalisms and English misses the point. The aim is not to express in English what we already know how to express in computerese. Rather, we must study those ideas expressible in natural language that no one knows how to represent at all.
What Should Artificial Intelligence Want from the Supercomputers?
While some proposals for supercomputers extend the power of existing machines such as CDC and Cray supercomputers, others suggest radical changes of architecture to speed up non-traditional operations such as logical inference in PROLOG, recognition/action in production systems, or message passing. We examine the case of parallel PROLOG to identify several related computations which subsume those of parallel PROLOG, but which have much wider interest and which may have roughly the same difficulty of mechanization. Similar considerations apply to some other proposed architectures as well, raising the possibility that current efforts may be limiting their aims unnecessarily.
Research at Jet Propulsion Laboratory
AI research at JPL started in 1972, when design and construction of an experimental "Mars Rover" began. Early in that effort, it was recognized that rover planning capabilities were inadequate. Research in planning was begun in 1975, and work on a succession of AI expert systems of steadily increasing power has continued to the present. Within the group, we have concentrated our efforts on expert systems, although work on vision and robotics has continued in separate organizations with which we have maintained informal contacts. The thrust of our work has been to build expert systems that can be applied in a real-world environment, and to actually put our systems into such environments, taking a consultative responsibility for meeting user requirements. Several supportive tools for AI are also being built. The current computational environment includes a large mainframe as well as high-performance personal LISP machines. A separate group has been engaged in the design of an intelligent workstation with advanced graphic displays intended to interface with AI systems.
Letters to the Editor
Bierre, Pierre; Barutusta, Joreg
Pierre Bierre, Clairvoyant Systems: ... the Project's proclaimed goal is one vitally important in a 1990's knowledge-intensive society ... the ability to help ... A decade from now, the nation will be crisscrossed with fiberoptic bundles capable of simultaneously carrying thousands of hi-resolution video conversations, and solid-state video cameras will be as abundant as microphone pickup devices are today. In short, the voice-telephone and printed-page information networks over which we communicate will be joined by two-way, super-narrowcast video, where each knowledge worker both receives product from myriad sources and reshapes ... and teaching. Already, one can "walk through" homes for sale thousands of miles away, learn how to assemble, operate, and fix complex machinery, drive around ... What makes video ...

Dear Editor: One of the sections I most look forward to in each new issue of the AI Magazine is the one entitled "Research in Progress." I like to see informative overviews of the research being conducted in different AI centers. ... I am sure there is some justification for this concentration, but I am inclined to believe there are other institutions that have, unfortunately, remained relatively ... I am concerned about this situation for one major reason.
Introduction to the COMTEX Microfiche Edition of Memos from the Stanford University Artificial Intelligence Laboratory
The Stanford Artificial Intelligence Project, later known as the Stanford AI Lab or SAIL, was created by Prof. John McCarthy shortly after his arrival at Stanford in 1962. As a faculty member in the Computer Science Division of the Mathematics Department, McCarthy began supervising research in artificial intelligence and timesharing systems with a few students. From this small start, McCarthy built a large and active research organization involving many other faculty and research projects as well as his own. There is no single theme to the SAIL memos. They cannot be easily categorized because they show a diversity of interests, resulting from the diversity of investigators and projects. Nevertheless, there are some important dimensions to the research that took place in the AI Lab that I will try to put in historical context in this brief introduction.
GLISP: A Lisp-Based Programming System with Data Abstraction
GLISP is a high-level language that is compiled into LISP. It provides a versatile abstract-data-type facility with hierarchical inheritance of properties and object-centered programming. GLISP programs are shorter and more readable than equivalent LISP programs. The object code produced by GLISP is optimized, making it about as efficient as handwritten LISP. An integrated programming environment is provided, including automatic incremental compilation, interpretive programming features, and an intelligent display-based inspector/editor for data and data-type descriptions. GLISP code is relatively portable; the compiler and data inspector are implemented for most major dialects of LISP and are available free or at nominal cost.
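For readers who have not seen such a facility, the following fragment illustrates, in standard Common Lisp (CLOS), the kind of data abstraction the abstract describes: hierarchical inheritance of properties and object-centered programming. It is only an analogy under stated assumptions; it is not GLISP syntax, and the VESSEL/TANK types are invented for the example. GLISP's contribution is compiling comparable abstract descriptions into plain LISP with the access code optimized.

;;; Illustrative CLOS sketch (not GLISP) of inherited, object-centered properties.

(defclass vessel ()
  ((name   :initarg :name   :reader vessel-name)       ; stored properties
   (volume :initarg :volume :reader vessel-volume)))

(defclass tank (vessel)                                 ; TANK inherits from VESSEL
  ((contents :initarg :contents :accessor tank-contents)))

;; A computed property defined once on the parent type and inherited by
;; every subtype.
(defgeneric large-p (v)
  (:method ((v vessel)) (>= (vessel-volume v) 100)))

;; Object-centered use: behavior follows the object's type.
(let ((tk (make-instance 'tank :name "T-1" :volume 120 :contents 'water)))
  (list (vessel-name tk) (large-p tk)))   ; => ("T-1" T)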