If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
The classical approach to the acquisition of knowledge and reason in artificial intelligence is to program the facts and rules into the machine. Unfortunately, the amount of time required to program the equivalent of human intelligence is prohibitively large. An alternative approach allows an automaton to learn to solve problems through iterative trial-and-error interaction with its environment, much as humans do. To solve a problem posed by the environment, the automaton generates a sequence or collection of responses based on its experience. The environment evaluates the effectiveness of this collection, and reports its evaluation to the automaton.
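The trial-and-error loop described above can be sketched as a small program. The following is a toy illustration, not any particular system from the literature: the "environment" is a hypothetical set of actions with hidden success probabilities (the numbers are invented), and the automaton keeps a running estimate of each action's worth, mostly exploiting its experience while occasionally exploring.

```python
import random

def learn_by_trial_and_error(n_actions=3, episodes=500, epsilon=0.1, seed=0):
    """Toy automaton: it generates responses, the environment evaluates
    each one, and the automaton shifts toward responses that scored well."""
    rng = random.Random(seed)
    # hidden environment: each action has an unknown success probability
    success_prob = [0.2, 0.5, 0.8]
    value = [0.0] * n_actions   # the automaton's running estimate per action
    count = [0] * n_actions
    for _ in range(episodes):
        # generate a response: mostly exploit experience, sometimes explore
        if rng.random() < epsilon:
            action = rng.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda i: value[i])
        # the environment evaluates the response and reports its evaluation
        reward = 1.0 if rng.random() < success_prob[action] else 0.0
        count[action] += 1
        value[action] += (reward - value[action]) / count[action]  # incremental mean
    return value
```

After enough iterations the automaton's estimates single out the most effective response, even though nothing about the environment was programmed into it in advance — which is the point of the alternative approach.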
Editor: On "Learning Language": I was dismayed by the inclusion of William Katke's article ("Learning Language Using A Pattern Recognition Approach," Spring 1985). Usually you do an excellent job of representing "the current state of the art in Artificial Intelligence" (to quote your Editorial Policy), but I consider this article an exception. First of all, although the article claims to be on "Learning Language," what it presents is at best a knowledge-free approach to learning syntax. I saw no evidence that the induced syntax is useful for anything, and good reasons to believe that it is not, such as the unmnemonic category names and the intrinsic limitations of finite state grammars. Second, this kind of stuff has been done before, and it didn't work too well then either; for a useful overview of the field and pointers into the literature, see the article on "Grammatical Inference" in Volume 3 of The Handbook of Artificial Intelligence.

The ideas and issues presented were firmly focused on a conventional view of the design process, a view I can caricature as the SPIV methodology. Rather than complete specifications and the verification of proposed implementations, we should concentrate more on incremental development of specifications as a result of assessment of performance.
Organizations are adaptive systems that continually attempt to push the limits of their own effectiveness to approach perfection. This is true of the "mom and pop" store that is threatened by the growth of shopping malls. It is true of the gigantic corporation that is threatened by public regulation and private competition. It is particularly true of organizations that are confronted with complex tasks, the vagaries of uncertainty, and the high and visible costs of irreversible error. The cause of organizational ineffectiveness or, indeed, failure is often perceived to be human frailty (Perrow 1984).
The Ninth International Conference on Machine Learning was held in Aberdeen, Scotland, from 1-3 July 1992, with 198 participants in attendance. The conference covered a broad range of topics drawn from the general area of machine learning, including concept-learning algorithms, clustering, speedup learning, formal analysis of learning systems, neural networks, genetic algorithms, and applications of machine learning. This article briefly touches on six selected talks that were of exceptional interest. Conference organizers were Derek Sleeman (conference chair) and Peter Edwards (local arrangements chair), both of the University of Aberdeen. Since the first machine-learning workshop was held at Carnegie-Mellon University (CMU) in July 1980, meetings have been held regularly, alternating between a more formal conference format and a more informal workshop format.
"Can we actually know the universe? My God, it's hard enough finding your way around Chinatown." "Know then thyself, presume not God to scan; The proper study of mankind is man." The field of AI is directed at the fundamental problem of how the mind works; its approach, among other things, is to try to simulate its working--in bits and pieces. History shows us that mankind has been trying to do this for hundreds of years, but the blooming of current computer technology has sparked an explosion in the research we can now do. At the center of AI is the wonderful capacity we call learning, to which the field is paying increasing attention. Learning is difficult and easy, complicated and simple, and most research does not address many aspects of its complexity. However, we in the AI field are starting to. Let us now celebrate the efforts of our forebears and rejoice in our own efforts, so that our successors can thrive in their research. This article is the substance, edited and ...
The First International Conference on Intelligent Systems for Molecular Biology (ISMB-93), held 6-9 July 1993 at the Lister Hill Center of the National Library of Medicine (NLM), attracted over 200 computer scientists and biologists from 13 countries. As organizers of the conference, we saw it as the culmination of a series of successful meetings and colloquia, including workshops by the American Association for Artificial Intelligence, that, taken as a whole, indicate that molecular biology is one of the most rapidly growing application areas of AI and warrants a dedicated conference. AAAI was a cosponsor of the meeting and published the proceedings (AAAI Press, Menlo Park, CA, ISBN 0-929280-47-4, $45). Extensive additional support in the form of grants was provided by the National Institutes of Health (NIH), primarily through NLM but also through the Division of Computer Research and Technology, and by the Department of Energy Office of Health and Environmental Research (which, like NIH, is heavily involved in the Human Genome Project). Further support was provided by the Biomatrix Society, a group that has a predilection for AI approaches to biological data.
His proofs are ingenious, cleverly argued, quite convincing to many of his contemporaries, and utterly wrong. The Simon Newcomb Award is given annually for the silliest published argument attacking AI. Our subject may be unique in the virulence and frequency with which it is attacked, both in the popular media and among the cultured intelligentsia. Recent articles have argued that the very idea of AI reflects a cancer in the heart of our culture and have proven (yet again) that it is impossible. While many of these attacks are cited widely, most of them are ridiculous to anyone with an appropriate technical education.
Semantic integration focuses on discovering, representing, and manipulating correspondences between entities in disparate data sources. The topic has been widely studied in the context of structured data, where the problems considered include ontology and schema matching, matching relational tuples, and reconciling inconsistent data values. In recent years, however, semantic integration over text has also received increasing attention. This article studies a key challenge in semantic integration over text: identifying whether different mentions of real-world entities, such as "JFK" and "John Kennedy," within and across natural language text documents, actually represent the same concept. We present a machine-learning study of this problem.
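To make the mention-matching task concrete, here is a minimal hand-crafted heuristic (all names and thresholds are invented for illustration) built from two surface features a learned model might also consume: acronym agreement and token overlap. It is deliberately naive — the article's own "JFK" versus "John Kennedy" pair defeats it, since the initials do not line up and no tokens are shared, which is precisely why a machine-learning treatment of the problem is worthwhile.

```python
def acronym(name):
    # first letters of each word, e.g. "John Fitzgerald Kennedy" -> "JFK"
    return "".join(word[0] for word in name.split()).upper()

def likely_same_entity(m1, m2, threshold=0.5):
    """Guess whether two textual mentions denote the same entity
    using only surface features (a toy heuristic, not the article's model)."""
    # acronym feature: one mention abbreviates the other
    if m1.upper() == acronym(m2) or m2.upper() == acronym(m1):
        return True
    # token-overlap feature: Jaccard similarity of the word sets
    t1 = set(m1.lower().split())
    t2 = set(m2.lower().split())
    return len(t1 & t2) / len(t1 | t2) >= threshold
```

For example, the heuristic accepts "JFK" against "John Fitzgerald Kennedy" (acronym match) and "John Kennedy" against "John F. Kennedy" (token overlap), but it cannot distinguish harder cases that require contextual or world knowledge.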
Is Robot Learning a New Subfield? The third section argues that the machine-learning and robotics communities reflect different cultures, target domains, terminologies, and standards of proof, resulting in a de facto separation. The unique constraints that robot learning places on representation are characterized in the fourth section. Finally, we close with some concluding remarks. Learning takes place when the system makes changes to its internal structure so as to improve some metric on its long-term future performance, as measured by a fixed standard (Russell 1991, p. 141).
Restricting the number of potential readers is unfortunate because we need to develop an interdisciplinary view of the world around us. This book should have been written to show a scientist with a good mathematics background how to do modeling and simulation. Scientific research needs more people trained in system concepts, people trained to understand and apply the Weltanschauung of system theory. Indeed, the recent recommendations for science education that came out of the Science for All Americans study, sponsored by the American Association for the Advancement of Science, emphasized an interdisciplinary approach to scientific concepts. By limiting the technical accessibility of this book, the author has not helped us address the need for training scientists in the use of interdisciplinary tools in scientific research.