Review of Heuristics: Intelligent Search Strategies for Computer Problem Solving
Levitt, Tod S., Horvitz, Eric J.
To fully appreciate Professor Pearl's book, begin with a careful reading of the title. It is a book about "... Intelligent ... Strategies ..." for the discovery and use of "Heuristics ..." to allow computers to solve "... Search ..." problems. Search is a critical component in AI programs (Nilsson 1980, Barr and Feigenbaum 1982), and in this sense Pearl's book is a strong contribution to the field of AI. It serves as an excellent ... heuristics. As a book about search, it is thorough, at the state of the art, and contains expositions that will delight ... and the numerous techniques for representing knowledge and uncertainty in common use in mainstream AI. Chapter 5 begins a quantitative performance analysis of ... This includes a nice exposition on branching processes, although the mathematically unsophisticated reader may find it difficult. Here Pearl introduces probabilistic models to complement probabilistic analysis of search heuristics, and ... a probabilistic analysis of nonadmissible heuristics in ...
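The kind of heuristic search Pearl analyzes can be illustrated by a minimal best-first (A*) sketch. This is not code from the book; the toy graph and heuristic values are illustrative assumptions, chosen so that the heuristic is admissible (it never overestimates the true remaining cost), which is the condition under which the first expansion of the goal is optimal.

```python
import heapq

def astar(start, goal, neighbors, h):
    """Best-first search ordered by f(n) = g(n) + h(n).

    With an admissible heuristic h, the first time `goal` is popped
    from the frontier, the accompanying path cost g is optimal.
    """
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# Hypothetical toy graph; h is admissible (true costs to D: A=4, B=3, C=1).
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)],
         "C": [("D", 1)], "D": []}
h = {"A": 3, "B": 2, "C": 1, "D": 0}
cost, path = astar("A", "D", lambda n: graph[n], lambda n: h[n])
```

Here the direct edge B-D (cost 5) is never committed to, because the heuristic steers the search through C for a total cost of 4.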
Intelligent-Machine Research at CESAR
The Oak Ridge National Laboratory (ORNL) Center for Engineering Systems Advanced Research (CESAR) is a national center for multidisciplinary long-range research and development (R&D) in machine intelligence and advanced control theory. Intelligent machines (including sensor-based robots) can be viewed as artificially created operational systems capable of autonomous decision making and action. One goal of the research is autonomous remote operations in hazardous environments. This review describes highlights of CESAR research through 1986 and alludes to future plans.
The AAAI-86 Conference Exhibits: New Directions for Commercial Artificial Intelligence
The annual conference of the Association for the Advancement of Artificial Intelligence (AAAI) is the premier U.S. gathering for artificial intelligence (AI) theoreticians and practitioners. On the commercial side, AAAI is the only event with a comprehensive exhibition that includes most significant U.S. vendors of AI products and services. In 1986 some 5100 people attended AAAI-86, a very good showing considering that the 1987 International Joint Conference on Artificial Intelligence (IJCAI) drew about the same number of people even with its substantial international support. The commercial exhibits at AAAI-86 (110 exhibitors; 80,000 square feet) gave us an opportunity to take a snapshot of an industry in transition. What I saw was a dramatic increase in the commercialization of AI technology and a decrease in the mystique, smoke, and hype. A preliminary tour of the AAAI-86 exhibits indicated that participants could expect substantial changes from the situation at IJCAI-85.
Yanli: A Powerful Natural Language Front-End Tool
An important issue in achieving acceptance of computer systems used by the nonprogramming community is the ability to communicate with these systems in natural language. Often, a great deal of time in the design of any such system is devoted to the natural language front end. An obvious way to simplify this task is to provide a portable natural language front-end tool or facility that is sophisticated enough to allow for a reasonable variety of input, allows modification, and yet is easy to use. This paper describes such a tool that is based on augmented transition networks (ATNs). It allows for user input to be in sentence or nonsentence form or both, provides a detailed parse tree that the user can access, and also provides the facility to generate responses and save information. The system provides a set of ATNs or allows the user to construct ATNs using system utilities. The system is written in Franz Lisp and was developed on a DEC VAX 11/780 running the ULTRIX-32 operating system.
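The transition-network idea behind such a front end can be sketched compactly. The grammar, lexicon, arc layout, and register names below are illustrative assumptions, not YANLI's actual networks (and a full ATN would add subnetwork pushes and arbitrary register tests); the sketch only shows how arcs consume words by category and fill registers along the way.

```python
# Hypothetical toy lexicon mapping words to syntactic categories.
LEXICON = {"the": "DET", "dog": "N", "cat": "N", "sees": "V"}

# Each arc: (from_state, category_test, to_state, register_to_set).
SENTENCE_NET = [
    ("S0", "DET", "S1", None),
    ("S1", "N",   "S2", "subject"),
    ("S2", "V",   "S3", "verb"),
    ("S3", "DET", "S4", None),
    ("S4", "N",   "SF", "object"),
]

def parse(words, net=SENTENCE_NET, final="SF"):
    """Walk the network, consuming one word per arc; on success return
    the registers filled along the way (a flat stand-in for a parse tree)."""
    state, registers = "S0", {}
    for word in words:
        cat = LEXICON.get(word)
        arc = next((a for a in net if a[0] == state and a[1] == cat), None)
        if arc is None:
            return None  # no arc accepts this word: the parse fails
        _, _, state, register = arc
        if register:
            registers[register] = word
    return registers if state == final else None

result = parse("the dog sees the cat".split())
```

The registers are what make the network "augmented": the accepted path is remembered as structured content (subject, verb, object), not merely accepted or rejected.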
Workshop on the Foundations of AI: Final Report
This report makes a case for the need to examine the methodological foundations of AI. Many aspects of AI have not yet developed to a point of general agreement. The goals of AI work, the methods for achieving these goals, the presentation of results, and the assessment of claims are highly contentious issues. All aspects of AI methodology are subject to debate. The Workshop on Foundations of AI was conceived as a forum in which such a debate could proceed. This report presents the rationale behind the event, the details of the program, and finally some afterthoughts.
AAAI News
... This support has included publicity, printing, office help, and ... Typical grants have been $5,000, although requests for up to $10,000 will be considered. Any topic in AI science or technology is appropriate, and anyone may volunteer to organize a workshop on any topic. ... Intelligence will be held 13-17 July 1987 in Seattle, Washington. AAAI-87's Technical Program will present outstanding research papers in AI. These papers will be divided into those emphasizing basic research and those emphasizing applied research. Based on a predictive ... M. Tenenbaum, Chair; Ronald Brachman, ... from the membership for conference sites for 1988, 1990, and 1991. The proposal should be structured around the new five-day format described elsewhere in this ...
A Question of Responsibility
In 1940, a 20-year-old science fiction fan from Brooklyn found that he was growing tired of stories that endlessly repeated the myths of Frankenstein and Faust: Robots were created and destroyed their creator; robots were created and destroyed their creator; robots were created and destroyed their creator, ad nauseam. So he began writing robot stories of his own. "[They were] robot stories of a new variety," he recalls. "Never, never was one of my robots to turn stupidly on his creator for no purpose but to demonstrate, for one more weary time, the crime and punishment of Faust. My robots were machines designed by engineers, not pseudo-men created by blasphemers. My robots reacted along the rational lines that existed in their 'brains' from the moment of construction." In particular, he imagined that each robot's artificial brain would be imprinted with three engineering safeguards, three Laws of Robotics: 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given it by human beings except where such orders would conflict with the first law. 3. A robot must protect its own existence as long as such protection does not conflict with the first or second law. The young writer's name, of course, was Isaac Asimov (1964), and the robot stories he began writing that year have become classics of science fiction, the standards by which others are judged. Indeed, because of Asimov one almost never reads about robots turning mindlessly on their masters anymore. But the legends of Frankenstein and Faust are subtle ones, and as the world knows too well, engineering rationality is not always the same thing as wisdom. M. Mitchell Waldrop is a reporter for Science Magazine, 1333 H Street N.W., Washington, D.C. 20005. Reprinted by permission of the publisher.
Donald A. Waterman 1936-1987
Don was one of the pioneers of our field, whose early research built the foundation for the area that would later come to be labeled "knowledge-based systems" (and still later "expert systems"). Don received a B.S. in Electrical Engineering from Iowa State University in 1958, and an M.S. in Electrical Engineering from the University of California, Berkeley, in ... He then entered the Ph.D. program at Stanford's newly created Computer Science Department. While at Berkeley he met a young professor named Ed Feigenbaum, and when Feigenbaum moved to Stanford in 1965 Don became Ed's first Ph.D. student. ... the checkers player, and Waterman's ... His subsequent contributions to protocol analysis, to the technology of rule-based systems, and to the literature of ... With Don's work on production systems in his thesis, it was only natural that he should move to Carnegie-Mellon to work with Allen Newell after acquiring his Ph.D. in 1968. Al takes up the story from there: "Don came to CMU in Psychology, rather than Computer Science. As with many people in AI, he had an abiding interest in understanding human cognition, although it always ..."
Why a Diagram is (sometimes) Worth Ten Thousand Words
When two representations are informationally equivalent, their computational efficiency depends on the information-processing operators that act on them. Two sets of operators may differ in their capabilities for recognizing patterns, in the inferences they can carry out directly, and in their control strategies (in particular, the control of search). Cognitive Science 11, 65-99.
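The distinction the abstract draws can be made concrete with a small sketch. The data below are invented for illustration: a flat list of pairs and a dict hold exactly the same information (they are informationally equivalent), but the operators that act on them differ in cost, much as searching a paragraph of sentences differs from reading a value directly off a diagram.

```python
# Two informationally equivalent representations of the same relation.
pairs = [("Oslo", 700_000), ("Lima", 10_000_000), ("Pune", 7_000_000)]
index = dict(pairs)  # same facts, reorganized for direct access

def lookup_list(city):
    # sequential scan: an O(n) search through a "sentential" form
    for name, population in pairs:
        if name == city:
            return population
    return None

def lookup_dict(city):
    # direct recognition: O(1) access, analogous to locating a
    # labeled element on a diagram without searching
    return index.get(city)
```

Both functions answer every query identically; only the work done by the operators differs, which is exactly the sense in which a diagram can be "worth ten thousand words" without containing any extra information.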