Bayesian Networks without Tears
I give an introduction to Bayesian networks for AI researchers with a limited grounding in probability theory. Over the last few years, this method of reasoning using probabilities has become popular within the AI probability and uncertainty community. Indeed, it is probably fair to say that Bayesian networks are to a large segment of the AI-uncertainty community what resolution theorem proving is to the AI-logic community. Nevertheless, despite what seems to be their obvious importance, the ideas and techniques have not spread much beyond the research community responsible for them. This is probably because the ideas and techniques are not that easy to understand. I hope to rectify this situation by making Bayesian networks more accessible to the probabilistically unsophisticated.
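The abstract only names the technique; as a rough, hypothetical illustration (not drawn from the article), the following Python sketch shows the kind of probabilistic reasoning a minimal two-node Bayesian network supports, computing a posterior belief from an assumed prior and assumed conditional probabilities via Bayes' rule.

    # Minimal sketch (not from the article): a two-node network Disease -> Symptom
    # with made-up probabilities, used to compute P(Disease | Symptom) by Bayes' rule.

    p_disease = 0.01                    # prior P(Disease = true), assumed
    p_symptom_given_disease = 0.9       # P(Symptom | Disease), assumed
    p_symptom_given_no_disease = 0.05   # P(Symptom | no Disease), assumed

    # Marginal probability of observing the symptom (law of total probability).
    p_symptom = (p_symptom_given_disease * p_disease
                 + p_symptom_given_no_disease * (1 - p_disease))

    # Posterior belief after observing the symptom (Bayes' rule).
    p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom
    print(round(p_disease_given_symptom, 3))   # about 0.154

Even this toy example shows the characteristic pattern: local conditional probabilities attached to the network's links are combined to update belief in an unobserved variable once evidence is observed.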
AAAI 1991 Spring Symposium Series Reports
The Association for the Advancement of Artificial Intelligence held its 1991 Spring Symposium Series on March 26-28 at Stanford University, Stanford, California. This article contains short summaries of the eight symposia that were conducted: Argumentation and Belief, Composite System Design, Connectionist Natural Language Processing, Constraint-Based Reasoning, Implemented Knowledge Representation and Reasoning Systems, Integrated Intelligent Architectures, Logical Formalizations of Commonsense Reasoning, and Machine Learning of Natural Language and Ontology.
Where's the AI?
I survey four viewpoints about what AI is: (1) AI means magic bullets, (2) AI means inference engines, (3) AI means getting a machine to do something you didn't think a machine could do (the "gee whiz" view), and (4) AI means having a machine learn. I describe a program exhibiting AI as one that can change as a result of interactions with the user. Such a program would have to process hundreds or thousands of examples as opposed to a handful. Because AI is a machine's attempt to explain the behavior of the (human) system it is trying to model, the ability of a program design to scale up is critical. Researchers need to face the complexities of scaling up to programs that actually serve a purpose. The move from toy domains into concrete ones has three big consequences for the development of AI. First, it will force software designers to face the idiosyncrasies of their users. Second, it will act as an important reality check between the language of the machine, the software, and the user. Third, the scaled-up programs will become templates for future work. For a variety of reasons, some of which I discuss in this article, the newly formed Institute for the Learning Sciences has been concentrating its efforts on building high-quality educational software for use in business and in elementary and secondary schools.
Principles of Diagnosis: Current Trends and a Report on the First International Workshop
Automated diagnosis is an important AI problem not only for its potential practical applications but also because it exposes issues common to all automated reasoning efforts and presents real challenges to existing paradigms. Current research in this area addresses many problems, including managing and structuring probabilistic information, modeling physical systems, reasoning with defeasible assumptions, and interleaving deliberation and action. Furthermore, diagnosis programs must face these problems in contexts where scaling up to deal with cases of realistic size results in daunting combinatorics. This article presents these and other issues as discussed at the First International Workshop on Principles of Diagnosis.
Knowledge Interchange Format: The KIF of Death
There has been a good deal of discussion recently about the possibility of standardizing knowledge representation efforts, including the development of an interlingua, or knowledge interchange format (KIF), that would allow developers of declarative knowledge to share their results with other AI researchers. In this article, I examine the practicality of this idea. I present some philosophical arguments against it, describe a straw-man KIF, and suggest specific experiments that would help explore these issues.
Domain-Based Program Synthesis Using Planning and Derivational Analogy
In my Ph.D. dissertation (Bhansali 1991), I develop an integrated knowledge-based framework for efficiently synthesizing programs by bringing together ideas from the fields of software engineering (software reuse, domain modeling) and AI (hierarchical planning, analogical reasoning). Based on this framework, I constructed a prototype system, APU, that can synthesize UNIX shell scripts from a high-level specification of problems typically encountered by novice shell programmers. An empirical evaluation of the system's performance points to certain criteria that determine the feasibility of the derivational analogy approach in the automatic programming domain when the cost of detecting analogies and recovering from wrong analogs is considered.
A Task-Specific Problem-Solving Architecture for Candidate Evaluation
This article describes a task-specific, domain-independent architecture for candidate evaluation. I discuss the task-specific architecture approach to knowledge-based system development. Finally, I describe a task-specific expert system shell, which includes a development environment (Ceved) and a run-time consultation environment (Ceval). This shell enables nonprogramming domain experts to easily encode and represent evaluation-type knowledge and incorporates the encoded knowledge in performance systems.
Enabling Technology for Knowledge Sharing
Neches, Robert; Fikes, Richard E.; Finin, Tim; Gruber, Thomas; Patil, Ramesh; Senator, Ted; Swartout, William R.
Building new knowledge-based systems today usually entails constructing new knowledge bases from scratch. It could instead be done by assembling reusable components. System developers would then only need to worry about creating the specialized knowledge and reasoners new to the specific task of their system. This new system would interoperate with existing systems, using them to perform some of its reasoning. In this way, declarative knowledge, problem-solving techniques, and reasoning services could all be shared among systems. This approach would facilitate building bigger and better systems cheaply. The infrastructure to support such sharing and reuse would lead to greater ubiquity of these systems, potentially transforming the knowledge industry. This article presents a vision of the future in which knowledge-based system development and operation is facilitated by infrastructure and technology for knowledge sharing. It describes an initiative currently under way to develop these ideas and suggests steps that must be taken in the future to try to realize this vision.
A Performance Evaluation of Text-Analysis Technologies
Lehnert, Wendy; Sundheim, Beth
A performance evaluation of 15 text-analysis systems was recently conducted to realistically assess the state of the art for detailed information extraction from unconstrained continuous text. Reports associated with terrorism were chosen as the target domain, and all systems were tested on a collection of previously unseen texts released by a government agency. Based on multiple strategies for computing each metric, the competing systems were evaluated for recall, precision, and overgeneration. The results support the claim that systems incorporating natural language-processing techniques are more effective than systems based on stochastic techniques alone. A wide range of language-processing strategies was employed by the top-scoring systems, indicating that many natural language-processing techniques provide a viable foundation for sophisticated text analysis. Further evaluation is needed to produce a more detailed assessment of the relative merits of specific technologies and establish true performance limits for automated information extraction.
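The abstract does not spell out its scoring formulas; as a hedged sketch using hypothetical counts and the slot-fill conventions commonly used in such evaluations, the following Python fragment shows one plausible way recall, precision, and overgeneration could be computed.

    # Hypothetical scoring arithmetic (not taken from the evaluation report).
    # correct  = system fills judged correct against the answer key
    # possible = fills present in the answer key
    # actual   = fills the system produced
    # spurious = produced fills that match nothing in the key

    def score(correct, possible, actual, spurious):
        recall = correct / possible            # coverage of the answer key
        precision = correct / actual           # accuracy of what was produced
        overgeneration = spurious / actual     # fraction of unwarranted output
        return recall, precision, overgeneration

    # Example with made-up counts for one system on one test set.
    r, p, o = score(correct=60, possible=100, actual=90, spurious=20)
    print(f"recall={r:.2f} precision={p:.2f} overgeneration={o:.2f}")
    # recall=0.60 precision=0.67 overgeneration=0.22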
Deterministic Autonomous Systems
Covrigaru, Arie A.; Lindsay, Robert K.
This article argues that autonomy, not problem-solving prowess, is the key property that defines the intuitive notion of "intelligent creature." To build an intelligent artificial entity that will act autonomously, we must first understand the attributes of a system that lead us to call it autonomous. The presence of these attributes gives autonomous systems the appearance of nondeterminism, but they can all be present in deterministic artifacts and living systems. We argue that autonomy means having the right kinds of goals and the ability to select goals from an existing set, not necessarily creating new goals. We analyze the concept of goals in problem-solving systems in general and establish criteria for the types of goals that characterize autonomy.