The Mind at AI: Horseless Carriage to Clock
Commentators on AI converge on two goals they believe define the field: (1) to better understand the mind by specifying computational models and (2) to construct computer systems that perform actions traditionally regarded as mental. We should recognize that AI has a third, hidden, more basic aim; that the first two goals are special cases of the third; and that the actual technical substance of AI concerns only this more basic aim. This third aim is to establish new computation-based representational media, media in which human intellect can come to express itself with different clarity and force. This article articulates this proposal by showing how the intellectual activity we label AI can be likened in revealing ways to each of five familiar technologies.
The Yale Artificial Intelligence Project: A Brief History
In the restaurant script, notated as $RESTAURANT, the roles might include customer, waitress, and cook; the props could be a menu, table, and silverware; the locations could be the bar, dining area, and kitchen; and the events would include arriving, seating, ...

... directly to the United Press International news wire and could skim news stories in dozens of different domains, and produce summaries in several languages. On the DEC-20 (which by 1978 had replaced the PDP-10), ...

Yale researchers explored intentionality ... One of the earliest programs to embody goals and plans within the CD paradigm was Jim Meehan's TALESPIN, which made up stories similar to the fables of Aesop.
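The script components enumerated above (roles, props, locations, and a stereotyped event sequence) can be sketched as a simple data structure. This is an illustrative encoding only, not SAM's or any Yale program's actual representation; the events after "seating" and the inference routine are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Script:
    """A Schankian script: a stereotyped event sequence with roles and props."""
    name: str
    roles: list
    props: list
    locations: list
    events: list  # canonical order of events in the stereotyped episode

# Illustrative $RESTAURANT script; events beyond "seating" are assumed here.
RESTAURANT = Script(
    name="$RESTAURANT",
    roles=["customer", "waitress", "cook"],
    props=["menu", "table", "silverware"],
    locations=["bar", "dining area", "kitchen"],
    events=["arriving", "seating", "ordering", "eating", "paying", "leaving"],
)

def infer_missing_events(story_events, script):
    """The classic script inference: any script event lying between the first
    and last mentioned events is assumed to have occurred, even if unstated."""
    idx = [script.events.index(e) for e in story_events if e in script.events]
    if not idx:
        return list(story_events)
    return script.events[min(idx):max(idx) + 1]

# A story that mentions only arriving and paying implies the steps between.
print(infer_missing_events(["arriving", "paying"], RESTAURANT))
# → ['arriving', 'seating', 'ordering', 'eating', 'paying']
```

The point of the sketch is the inference step: a story understander fills in the unmentioned middle of a known stereotyped episode.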
Cognitive Expert Systems and Machine Learning: Artificial Intelligence Research at the University of Connecticut
Selfridge, Mallory, Dickerson, Donald J., Biggs, Stanley F.
In order for next-generation expert systems to demonstrate the performance, robustness, flexibility, and learning ability of human experts, they will have to be based on cognitive models of expert human reasoning and learning. We call such next-generation systems cognitive expert systems. Research at the Artificial Intelligence Laboratory at the University of Connecticut is directed toward understanding the principles underlying cognitive expert systems and developing computer programs embodying those principles. The Causal Model Acquisition System (CMACS) learns causal models of physical mechanisms by understanding real-world natural language explanations of those mechanisms. The Going Concern Expert (GCX) uses business and environmental knowledge to assess whether a company will remain in business for at least the following year. The Business Information System (BIS) acquires business and environmental knowledge from in-depth reading of real-world news stories. These systems are based on theories of expert human reasoning and learning, and thus represent steps toward next-generation cognitive expert systems.
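A minimal sketch of the kind of judgment GCX is described as making, combining business and environmental evidence into a going-concern assessment. Every factor, threshold, and rule below is invented for illustration; none of it is GCX's actual knowledge base.

```python
# Hypothetical going-concern check in the spirit of GCX. All rules invented.
def going_concern(company):
    """Return (likely_to_survive_next_year, list_of_negative_indicators)."""
    reasons = []
    if company["current_ratio"] < 1.0:
        reasons.append("current liabilities exceed current assets")
    if company["consecutive_loss_years"] >= 3:
        reasons.append("sustained operating losses")
    if company["credit_line_revoked"]:
        reasons.append("loss of financing")
    if company["industry_outlook"] == "declining":
        reasons.append("adverse environment")
    # Invented judgment rule: more than one independent negative indicator
    # casts substantial doubt on survival through the following year.
    return (len(reasons) <= 1, reasons)

ok, why = going_concern({
    "current_ratio": 0.8,
    "consecutive_loss_years": 3,
    "credit_line_revoked": False,
    "industry_outlook": "stable",
})
print(ok, why)  # → False, two negative indicators found
```

The design point, per the abstract, is that the judgment combines heterogeneous business and environmental knowledge rather than a single financial ratio.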
A Question of Responsibility
In 1940, a 20-year-old science fiction fan from Brooklyn found that he was growing tired of stories that endlessly repeated the myths of Frankenstein and Faust: Robots were created and destroyed their creator; robots were created and destroyed their creator; robots were created and destroyed their creator, ad nauseam. So he began writing robot stories of his own. "[They were] robot stories of a new variety," he recalls. "Never, never was one of my robots to turn stupidly on his creator for no purpose but to demonstrate, for one more weary time, the crime and punishment of Faust. My robots were machines designed by engineers, not pseudo-men created by blasphemers. My robots reacted along the rational lines that existed in their 'brains' from the moment of construction." In particular, he imagined that each robot's artificial brain would be imprinted with three engineering safeguards, three Laws of Robotics: 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. The young writer's name, of course, was Isaac Asimov (1964), and the robot stories he began writing that year have become classics of science fiction, the standards by which others are judged. Indeed, because of Asimov one almost never reads about robots turning mindlessly on their masters anymore. But the legends of Frankenstein and Faust are subtle ones, and as the world knows too well, engineering rationality is not always the same thing as wisdom. M. Mitchell Waldrop is a reporter for Science Magazine, 1333 H Street N.W., Washington, D.C. 20005. Reprinted by permission of the publisher.
Letters to the Editor
Mostow, Jack, Katke, William, Partridge, Derek, Koton, Phyllis, Estrin, Deborah, Gray, Sharon, Ladin, Rivka, Eisenberg, Mike, Duffy, Gavin, Dorr, Bonnie, Batali, John, Levitt, David, Shirley, Mark, Giansiracusa, Robert, Montalvo, Fanya, Pitman, Kent, Golden, Ellen, Stone, Bob
... to be accommodated within the SPIV paradigm. But until such time as we find these learning algorithms (and I don't think that many would argue that such algorithms will be available in the foreseeable future) we must face the prospect of systems that will need to be modified, in nontrivial ways, throughout their useful lives. Thus incremental development will be a constant feature of such software, and if it is not fully automatic then it will be part of the human maintenance of the system. I am, of course, not suggesting that the products of, say, architectural design (i.e., buildings) will need a learning capability. Nevertheless, a final fixed design that remains "optimal" in a dynamically changing world is a rare event. The similarity between AI system development and the design of more concrete objects is still present, but it is, in some respects, rather tenuous, I admit. And even if verification were possible it would not contribute very much to the development of production software. Hence "verifiability must not be allowed to overshadow reliability. Scientists should not confuse mathematical models with reality." AI is perhaps not so special; it is rather an extreme, and thus certain of its characteristics are more obvious than in conventional software applications. Thus the SPIV methodology may be inappropriate for an even larger class of problems than those of AI. I have raised all these points not to try to deny the worth of Mostow's ideas and issues concerning the design process, but to make the case that such endeavors should also be pursued within a fundamentally incremental and evolutionary framework for design. The potential of the RUDE paradigm is deserving of more attention than it is ...
An AIer's Lament
It is interesting to note that there is no agreed-upon definition of artificial intelligence. Why is this interesting? Because government agencies ask for it, software shops claim to provide it, popular magazines and newspapers publish articles about it, dreamers base their fantasies on it, and pragmatists criticize and denounce it. Such a state of affairs has persisted since Newell, Simon, and Shaw wrote their first chess program and proclaimed that in a few years a computer would be the world champion. Not knowing exactly what we are talking about or expecting is typical of a new field; for example, witness the chaos that centered around program verification of security-related aspects of systems a few years ago. The details are too grim to recount in mixed company. However, artificial intelligence has been around for 30 years, so one might wonder why our wheels are still spinning. Below, an attempt is made to answer this question and to show why, in a serious sense, artificial intelligence can never demonstrate an outright success within its own discipline. In addition, we will see why the old bromide that "as soon as we understand how to solve a problem, it's no longer artificial intelligence" is necessarily true.
Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project
Buchanan, Bruce G., Shortliffe, Edward H.
Artificial intelligence, or AI, is largely an experimental science—at least as much progress has been made by building and analyzing programs as by examining theoretical questions. MYCIN is one of several well-known programs that embody some intelligence and provide data on the extent to which intelligent behavior can be programmed. As with other AI programs, its development was slow and not always in a forward direction. But we feel we learned some useful lessons in the course of nearly a decade of work on MYCIN and related programs. In this book we share the results of many experiments performed in that time, and we try to paint a coherent picture of the work. The book is intended to be a critical analysis of several pieces of related research, performed by a large number of scientists. We believe that the whole field of AI will benefit from such attempts to take a detailed retrospective look at experiments, for in this way the scientific foundations of the field will gradually be defined. It is for all these reasons that we have prepared this analysis of the MYCIN experiments.
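As a concrete taste of the rule-based approach the book analyzes, here is a toy rule interpreter using MYCIN's well-known certainty-factor combining function for corroborating positive evidence. The combining function is MYCIN's; everything else (the rules, the findings, the flat rule-firing loop) is invented for illustration and omits MYCIN's backward chaining, context tree, and handling of negative evidence.

```python
def combine_cf(cf1, cf2):
    """MYCIN's combining function for two positive certainty factors:
    corroborating evidence increases belief but never reaches 1.0."""
    return cf1 + cf2 * (1 - cf1)

# Invented rules: (premise findings, conclusion, rule certainty factor).
RULES = [
    ({"gram_stain:negative", "morphology:rod"}, "e.coli", 0.6),
    ({"site:blood", "morphology:rod"}, "e.coli", 0.4),
]

def conclude(findings):
    """Fire every rule whose premises all hold in the findings,
    combining certainty factors per conclusion."""
    belief = {}
    for premises, conclusion, cf in RULES:
        if premises <= findings:  # all premises present
            belief[conclusion] = combine_cf(belief.get(conclusion, 0.0), cf)
    return belief

beliefs = conclude({"gram_stain:negative", "morphology:rod", "site:blood"})
print(beliefs)  # both rules fire: 0.6 + 0.4 * (1 - 0.6) = 0.76
```

The combining function is the interesting part: two independent rules each partially supporting a conclusion yield more confidence than either alone, which is how MYCIN accumulated evidence across its several hundred rules.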
Why People Think Computers Can't
Today, surrounded by so many automatic machines, industrial robots, and the R2-D2s of the Star Wars movies, most people think AI is much more advanced than it is. But still, many "computer experts" don't believe that machines will ever "really think." I think those specialists are too used to explaining that there's nothing inside computers but little electric currents. And there are many other reasons why so many experts still maintain that machines can never be creative, intuitive, or emotional, and will never really think, believe, or understand anything.