

A Method for Evaluating Candidate Expert System Applications

AI Magazine

…used be as good as possible. Two characteristics of the domain expert can help determine the degree of expertise. First, the expert is highly respected by experienced people in the domain field. Because the goal of the project is often to simulate the expert's performance, this expert should be viewed by others as a genuine expert. Second, the problem domain of the task is stable. This means that the domain should be well established and unlikely to undergo vast changes during the life of the expert system project. This stability does not require that the problem-solving process required to perform the task be well understood, simply that the basics of the task domain be established. The application task requires little or no common sense. Although researchers are continuing to study the representation of commonsense knowledge, no practical systems have been developed to date (Lenat, Prakash, and Shepherd 1986). A problem requiring common sense on the part of the expert should be left to a…
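Criteria like these are naturally read as a weighted checklist over a candidate application. The sketch below is purely illustrative; the criterion names, weights, and scoring scheme are invented here and are not taken from the article's method.

```python
# Hypothetical checklist for screening a candidate expert system application.
# Names and weights are illustrative, not the article's actual criteria.
CRITERIA = {
    "domain_is_stable": 3,            # basics of the task domain are established
    "little_common_sense_needed": 3,  # task avoids open-ended commonsense reasoning
    "expert_is_respected": 2,         # expert viewed by peers as genuine
}

def score_candidate(answers):
    """Sum the weights of the criteria this candidate satisfies."""
    return sum(w for name, w in CRITERIA.items() if answers.get(name))

diagnosis_task = {"domain_is_stable": True,
                  "little_common_sense_needed": True,
                  "expert_is_respected": True}
print(score_candidate(diagnosis_task))  # 8 of a possible 8
```

A real screening method would weigh many more features; the point of the sketch is only that each criterion can be assessed independently and aggregated.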


Uncertainty in Artificial Intelligence

AI Magazine

The Fourth Uncertainty in Artificial Intelligence workshop was held 19-21 August 1988. The workshop featured significant developments in the application of theories of representation and reasoning under uncertainty. A recurring idea at the workshop was the need to examine uncertainty calculi in the context of choosing representation, inference, and control methodologies. The effectiveness of these choices in AI systems tends to be best considered in terms of specific problem areas. These areas include automated planning, temporal reasoning, computer vision, medical diagnosis, fault detection, text analysis, distributed systems, and the behavior of nonlinear systems. Influence diagrams are emerging as a unifying representation, enabling tool development. Interest and results in uncertainty in AI are growing beyond the capacity of a workshop format.
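To make the flavor of reasoning under uncertainty concrete, here is a minimal sketch of Bayesian updating over a two-node chance model (Disease, Test), the simplest relative of the influence diagrams mentioned above. The numbers are invented for illustration and come from no workshop paper.

```python
# Two-node model: Disease -> Test. Illustrative probabilities only.
p_disease = 0.01          # prior P(D)
p_pos_given_d = 0.95      # sensitivity, P(+ | D)
p_pos_given_not_d = 0.05  # false-positive rate, P(+ | not D)

# Marginal probability of a positive test (law of total probability).
p_pos = p_pos_given_d * p_disease + p_pos_given_not_d * (1 - p_disease)

# Posterior P(D | +) by Bayes' rule.
p_d_given_pos = p_pos_given_d * p_disease / p_pos
print(round(p_d_given_pos, 3))  # about 0.161: a positive test still leaves the disease unlikely
```

Influence diagrams extend exactly this kind of model with decision and value nodes, which is part of why they serve as a unifying representation.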


AAAI News

AI Magazine

Winter 1988. Notes to Financial Statements. Program Committee (reported by Reid Smith, Program Co-Chair).


Connectionism and Information Processing Abstractions

AI Magazine

Connectionism challenges a basic assumption of much of AI, that mental processes are best viewed as algorithmic symbol manipulations. Connectionism replaces symbol structures with distributed representations in the form of weights between units. For problems close to the architecture of the underlying machines, connectionist and symbolic approaches can make different representational commitments for a task and, thus, can constitute different theories. For complex problems, however, the power of a system comes more from the content of the representations than the medium in which the representations reside. The connectionist hope of using learning to obviate explicit specification of this content is undermined by the problem of programming appropriate initial connectionist architectures so that they can in fact learn. In essence, although connectionism is a useful corrective to the view of mind as a Turing machine, for most of the central issues of intelligence, connectionism is only marginally relevant.
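The contrast the abstract draws can be made concrete with a minimal, purely illustrative example: in the connectionist style, the "knowledge" ends up in learned weights between units rather than in explicit symbol structures. The perceptron below (learning the OR function) is my sketch, not anything from the article.

```python
# Minimal connectionist sketch: a perceptron whose knowledge lives in weights
# adjusted by a learning rule, not in programmed symbol structures.
def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out                  # perceptron learning rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in or_data])  # [0, 1, 1, 1]
```

The abstract's critique applies here in miniature: the learning rule works only because the architecture (two inputs, one threshold unit) was chosen appropriately in advance, which is itself a representational commitment.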


Review of Reasoning About Change

AI Magazine

Yoav Shoham's revised doctoral dissertation is not fully accessible to all readers, but it provides a good introduction to reasoning about change; the references, however, are sometimes incomplete.


Review of How Machines Think: A General Introduction to Artificial Intelligence Illustrated in Prolog

AI Magazine

Nigel Ford's book purports to be both an introduction to AI and an examination of whether machines are cognizant entities. With this pairing, Ford intends to begin at the beginning, answering the question "What is AI?" and then to proceed to his main thesis about whether machines can think. Unfortunately, Ford is unable to move on to the higher plane of his main thesis.


Foundations and Grand Challenges of Artificial Intelligence: AAAI Presidential Address

AI Magazine

AAAI is a society devoted to supporting progress in the science, technology, and applications of AI. I thought I would use this occasion to share with you some of my thoughts on recent advances in AI, the insights and theoretical foundations that have emerged from the past thirty years of stable, sustained, systematic exploration in our field, and the grand challenges motivating its research.


A Novel Approach to Expert Systems for Design of Large Structures

AI Magazine

A novel approach is presented for the development of expert systems for structural design problems. This approach differs from conventional expert systems in two fundamental respects. First, mathematical optimization is introduced into the design process. Second, a computer is used to obtain part of the knowledge needed by the expert system, in addition to the heuristics and experiential knowledge obtained from documented materials and human experts. As an example of this approach, a prototype coupled expert system, the bridge truss expert (BTExpert), is presented for optimum design of bridge trusses subjected to moving loads. BTExpert was developed by interfacing an interactive optimization program written in Fortran 77 with an expert system shell written in Pascal. This new generation of expert systems, embracing advanced technologies such as AI (machine intelligence), numeric optimization, and interactive computer graphics, should have enormous practical implications.
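The coupled idea (a numeric optimizer searching the design space while codified knowledge acts as constraints) can be sketched in a few lines. Everything below is invented for illustration: the objective, the stress rule, and the numbers bear no relation to BTExpert's actual Fortran 77/Pascal implementation.

```python
# Hypothetical coupling of numeric optimization with rule-based knowledge.
def member_weight(area_cm2, span_m=10.0, density_t_per_m3=7.85):
    """Objective to minimize: weight (kg) of one steel member."""
    return area_cm2 * 1e-4 * span_m * density_t_per_m3 * 1000

def satisfies_rules(area_cm2, load_kn=100.0, allow_stress_mpa=150.0):
    """Codified design knowledge: keep member stress within the allowable."""
    stress_mpa = load_kn * 1000 / (area_cm2 * 100)  # N / mm^2
    return stress_mpa <= allow_stress_mpa

# Coupled loop: enumerate candidate cross-sections, keep the lightest feasible one.
best = min((a for a in range(1, 101) if satisfies_rules(a)),
           key=member_weight)
print(best)  # smallest area (cm^2) passing the stress rule
```

In a real coupled system the optimizer would be gradient-based and the rule base far richer; the sketch only shows the division of labor between search and knowledge.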


Theoretical Issues in Conceptual Information Processing

AI Magazine

The Fifth Annual Theoretical Issues in Conceptual Information Processing Workshop took place in Washington, D.C. in June 1987. About 100 participants gathered to hear several invited talks and panels discussing the issues relating to artificial intelligence and cognitive science.


How Evaluation Guides AI Research: The Message Still Counts More than the Medium

AI Magazine

Evaluation should be a mechanism of progress both within and across AI research projects. For the individual, evaluation can tell us how and why our methods and programs work and, so, tell us how our research should proceed. For the community, evaluation expedites the understanding of available methods and, so, their integration into further research. In this article, we present a five-stage model of AI research and describe guidelines for evaluation that are appropriate for each stage. These guidelines, in the form of evaluation criteria and techniques, suggest how to perform evaluation. We conclude with a set of recommendations that suggest how to encourage the evaluation of AI research.