The Electrical Systems Division at the NASA Kennedy Space Center has developed and deployed an agent-based tool to monitor the space shuttle's ground processing telemetry stream. The agent provides autonomous monitoring of the telemetry stream and automatically alerts system engineers when predefined criteria have been met. Efficiency and safety are improved through increased automation.
The Seventh International Workshop on Qualitative Reasoning about Physical Systems was held on 16-19 May 1993. To promote deep and focused discussion, participation was limited to 50 researchers; the bulk of the attendees work in AI, but several engineers and cognitive psychologists enriched the atmosphere. The two topics attracting special attention were automated modeling and the design task. This article briefly describes some of the presentations and discussions held during the workshop.
Robots in the Robot Host competition, part of the Eighteenth National Conference on Artificial Intelligence (AAAI-2002) Mobile Robot Competition, faced two challenges: (1) a serving task similar to the Hors d'Oeuvres, Anyone? events of previous years and (2) a new task, the robot information kiosk, added in this, the sixth year of the Robot Host competition. Both tasks required moving carefully among people, politely offering them information or hors d'oeuvres, recognizing when the people are making a request, and answering the request. Three entries took on the challenge of creating host robots that can both offer hors d'oeuvres to attendees of the robot exhibition and serve as a source of information to attendees during breaks in the program.
The Brain Makers: Genius, Ego, and Greed in the Quest for Machines That Think, Harvey P. Newquist, Sams Publishing, Indianapolis, Indiana, 1994, 488 pp., $24.95, ISBN 0-672-30412-0. Newquist is a business reporter who covered the field during the 1980s, when academic researchers went commercial in one of that decade's smaller speculative bubbles. His book begins with a history spanning Babbage to Turing to Minsky, McCarthy, Newell, Simon, Samuel, and others at the 1956 Dartmouth meeting and moves on to the 1980s, where the real story begins. Good, if glib, descriptions of people, places, and events are punctuated by technical explanations ranging from poor to inane. Because I am a little slow, it took me a quarter of the book to recognize a journalist with an attitude.
As a field, knowledge representation (KR) has often been accused of being off in a theoretical no-man's-land, removed from, and largely unrelated to, the central issues in AI. This article argues that recent trends in KR instead demonstrate the benefits of the interplay between science and engineering, a lesson from which all AI could benefit. The article is an edited version of a survey talk on the Third International Conference on Knowledge Representation and Reasoning (KR '92) (Nebel, Rich, and Swartout 1992) that I presented at the Thirteenth International Joint Conference on Artificial Intelligence (IJCAI '93). Although nominally a conference overview, the article attempts to summarize the state of the conference and the field with respect to the intertwined goals of science and engineering.
Clusters of conversation provide a more valuable way to spend one's time than attending sessions. At the last national meeting we escaped from the celebrations of the recent victory of Deep Blue over the dreaded Kasparov to find just such a group, already engaged in an animated discussion: A: We need to draw a line between a program that has some intelligence in it and one that doesn't. All Deep Blue does is brute-force search. That hardly counts as AI.
We use our experience with the Dipmeter Advisor system for well-log interpretation as a case study to examine the development of commercial expert systems. We discuss the nature of these systems as we see them in the coming decade, characteristics of the evolution process, development methods, and skills required in the development team. We argue that the tools and ideas of rapid prototyping and successive refinement accelerate the development process. We note that different types of people are required at different stages of expert system development: those who are primarily knowledgeable in the domain but who can use the framework to expand the domain knowledge, and those who can actually design and build expert system tools and components. We also note that traditional programming skills continue to be required in the development of commercial expert systems. Finally, we discuss the problem of technology transfer and compare our experience with some of the traditional wisdom of expert system development. We have observed during this effort that the development of a commercial expert system imposes a substantially different set of constraints and requirements, in terms of characteristics and methods of development, than those seen in the research environment.
A new crop of award-winning applications stood as a testimonial to the continuing inroads AI is making into business, saving hundreds of millions of dollars annually. In addition, attendees were introduced to a number of new AI technologies, such as data mining, that are coming out of the research lab and being readied for use. A new application area, knowledge publishing, came into the spotlight with a Compaq application that has shrink-wrapped its customer-service troubleshooting knowledge for networked printers. It is now shipped with the product and is saving the company an estimated $10 to $20 million annually. Just as help desks were touted as a natural fit for AI technology a few years ago, knowledge publishing is now the next great hope.
The Sixth Annual Knowledge-Based Software Engineering Conference (KBSE-91) was held at the Sheraton University Inn and Conference Center in Syracuse, New York, from Sunday afternoon, 22 September, through midday Wednesday, 25 September. The KBSE field is concerned with applying knowledge-based AI techniques to the problems of creating, understanding, and maintaining very large software systems. The conference was sponsored by Rome Laboratory (previously Rome Air Development Center) and was held in cooperation with the Association for Computing Machinery and the American Association for Artificial Intelligence. The origin of KBSE-91 is as follows: In 1983, Rome Air Development Center published a report calling for the development of a knowledge-based software assistant (KBSA) that would use AI techniques to support all phases of the software development process (Green et al. 1986).
In this article, which forms the conclusion to the AAAI Press book Automating Software Design, edited by Michael Lowry and Robert McCartney, Michael R. Lowry argues that there is substantial evidence that AI technology can meet the requirements of the large potential market that will exist for knowledge-based software engineering at the turn of the century. Lowry discusses the future of software engineering and how progress in knowledge-based software engineering (KBSE) will lead to system development environments. Specifically, he examines how KBSE techniques promote additive programming methods and how they can be developed and introduced in an evolutionary way. The enabling technology will come from AI, formal methods, programming language theory, and other areas of computer science. This technology will enable much of the knowledge now lost in the software development process to be captured in machine-encoded form and automated.