If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
This new conference series promotes multidisciplinary research on tools and methodologies for efficiently capturing knowledge from a variety of sources and creating representations that can be (or eventually can be) useful for reasoning. The conference attracted researchers from diverse areas of AI, including knowledge representation, knowledge acquisition, intelligent user interfaces, problem solving and reasoning, planning, agents, text extraction, and machine learning. Knowledge acquisition has been a challenging area of research in AI, with its roots in early work to develop expert systems. Driven by the modern internet culture and knowledge-based industries, the study of knowledge capture has a renewed importance. Although there has been considerable work over the years in the area, activities have been distributed across several distinct research communities.
R. B. Abhyankar: Emphasizing theory and implementation issues more than specific applications and Prolog programming techniques, Computing with Logic: Logic Programming with Prolog (The Benjamin/Cummings Publishing Company, Menlo Park, Calif., 1988, 535 pp., $27.95) by David Maier and David S. Warren, respected researchers in logic programming, is a superb book. Offering an in-depth treatment of advanced topics, the book also includes the necessary background material on logic and automatic theorem proving, making it self-contained. The only real prerequisite is a first course in data structures, although it would be helpful if the reader has also had a first course in program translation. The book has a wealth of exercises and would make an excellent textbook for advanced undergraduate or graduate students in computer science; it is also appropriate for programmers interested in the implementation of Prolog. The book presents the concepts of logic programming through the theory, implementation, and application of Proplog, Datalog, and Prolog, three logic programming languages of increasing complexity that are based on Horn clause subsets of propositional, predicate, and functional logic, respectively. This incremental approach, unique to this book, is effective in conveying a thorough understanding of the subject. The book consists of 12 chapters grouped into three parts (Part 1: chapters 1 to 3; Part 2: chapters 4 to 6; Part 3: chapters 7 to 12), an appendix, and an index. The three parts, each dealing with one of these logic programming languages, are organized the same way. First, the authors informally present the language using examples; an interpreter is also presented.
Then the formal syntax and semantics for the language and logic are presented, along with soundness and completeness results for the logic and the effects of various search strategies. Next, they give optimization techniques for the interpreter. Each chapter ends with exercises, brief comments regarding the material in the chapter, and a bibliography. Chapter 1 presents top-down and bottom-up interpreters for Proplog. Chapter 2 offers a good discussion of the related notions of negation as failure, the closed-world assumption, minimal models, and stratified programs. Chapter 3 considers clause indexing and lazy concatenation as optimization techniques for the Proplog interpreter of chapter 1. Chapter 4 explains the connection between Datalog and relational algebra. Chapter 5 contains a proof of Herbrand's theorem for predicate logic.
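The top-down versus bottom-up distinction in chapter 1 is easy to make concrete. The following Python sketch (not code from the book; the rule encoding as `(head, [body atoms])` pairs is an assumption made for this illustration) shows a bottom-up interpreter for propositional Horn clauses in the spirit of Proplog: it forward-chains from facts until it reaches the least fixpoint of derivable atoms.

```python
# Minimal bottom-up (forward-chaining) interpreter for propositional
# Horn clauses, in the spirit of Proplog. Each rule is a pair
# (head, body), where body is a list of atoms; a fact has an empty body.

def bottom_up(rules):
    """Return the least fixpoint: the set of all derivable atoms."""
    derived = set()
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            # Fire a rule once all its body atoms have been derived.
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

program = [
    ("b", []),          # b.
    ("c", []),          # c.
    ("a", ["b", "c"]),  # a :- b, c.
    ("d", ["a", "e"]),  # d :- a, e.  (e is never derived, so neither is d)
]

print(sorted(bottom_up(program)))  # ['a', 'b', 'c']
```

A top-down interpreter would instead start from a goal atom and recursively attempt to prove each atom in a matching rule's body, which is essentially how Prolog itself executes.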
The annual Workshop on the Validation and Verification of Knowledge-Based Systems is the leading forum for presenting research on the validation and verification of knowledge-based systems (KBSs). The 1994 workshop was significant in that there was a definitive move in the philosophical position of the workshop from a testing- and tool-based approach to KBS evaluation to a formal specification-based approach. The workshop included 12 full papers and 5 short papers and was attended by 35 researchers from government, industry, and academia. The workshop has influenced the evolution of the discipline from its origins in 1988, when researchers were asking the questions, How can we evaluate the correctness of a KBS? How is this process different from conventional system evaluation?
This report stems from a workshop that was organized by the American Association for Artificial Intelligence (AAAI) and cosponsored by the Information Technology and Organizations Program of the National Science Foundation. The purpose of the workshop was twofold: first, to increase awareness among the artificial intelligence (AI) community of opportunities presented by the National Information Infrastructure (NII) activities, in particular, the Information Infrastructure and Technology Applications (IITA) component of the High Performance Computing and Communications Program; and second, to identify key contributions of research in AI to the NII and IITA. The workshop included a presentation by NSF of IITA program goals and a brief discussion of a report aimed at identifying important AI research thrusts that could support the development of twenty-first century computing systems. That report, as well as the full set of initial suggestions for it from AAAI fellows and officers, was circulated to attendees prior to the workshop. Workshop attendees identified specific contributions that AI research could make in the next decade to the technology base needed for NII/IITA and the major research challenges that had to be met.
Commentators on AI converge on two goals they believe define the field: (1) to better understand the mind by specifying computational models and (2) to construct computer systems that perform actions traditionally regarded as mental. We should recognize that AI has a third, hidden, more basic aim; that the first two goals are special cases of the third; and that the actual technical substance of AI concerns only this more basic aim. This third aim is to establish new computation-based representational media, media in which human intellect can come to express itself with different clarity and force. This article articulates this proposal by showing how the intellectual activity we label AI can be likened in revealing ways to each of five familiar technologies. AI is not about building artificial intelligences, nor is it about understanding the human mind or any other kind of mind.
Developing agents that can perceive the world and reason about what they perceive in relation to their own goals and actions has been the Holy Grail of AI. Early attempts at such holistic intelligence (for example, SRI International's Shakey robot) proved difficult, and AI researchers turned their attention to component technologies for structuring a single agent, such as planning, knowledge representation, diagnosis, and learning. Although most AI research was focused on single-agent issues, a small number of AI researchers gathered at the Massachusetts Institute of Technology Endicott House in 1980 for the First Workshop on Distributed AI. The main scientific goal of distributed AI (DAI) is to understand the principles underlying the behavior of multiple entities in the world, called agents, and their interactions. The discipline is concerned with how agent interactions produce overall multiagent system (MAS) behavior.
Serving hors d'oeuvres is not as easy as it might seem! You have to move carefully between people, gently and politely offer them hors d'oeuvres, make sure that you have not forgotten to serve someone in the room, and refill the serving tray when required. These are the challenges that robots have to face in the Hors d'Oeuvres, Anyone? event. In the fifth year that this event has been held, five entries took on the challenge of creating service robots that can offer hors d'oeuvres to attendees of the robot exhibition. Such robots require the ability to move safely in a crowded environment, cover a serving area, find people and stop to offer them food and interact with them, detect when more food is needed, and take the actions necessary to refill the serving tray.
Abstraction, reformulation, and approximation (AR&A) techniques have been used to solve a variety of tasks, including automatic programming, constraint satisfaction, design, diagnosis, machine learning, search, planning, reasoning, game playing, scheduling, and theorem proving. The primary purpose of AR&A techniques in such settings is to overcome computational intractability. In addition, AR&A techniques are useful for accelerating learning and summarizing sets of solutions. The Fifth Symposium on Abstraction, Reformulation, and Approximation (SARA-2002) was held from 2 to 4 August 2002, directly after the Eighteenth National Conference on Artificial Intelligence (AAAI-2002). It was chaired by Sven Koenig from the Georgia Institute of Technology and Robert Holte from the University of Alberta (Canada) and held at Kananaskis Mountain Lodge, Kananaskis Village, Alberta (Canada), between Calgary and Banff in the Rocky Mountains.
In reviewing a book of this kind, it is necessary to answer three questions: (1) how important is the workshop topic, (2) how valuable are the included papers, and (3) how coherent is the volume as a whole? I address each question in turn. In the last decade, knowledge-based systems (KBSs) emerged from being a research subfield within AI to become an application software technology. Although many specific aspects of knowledge acquisition, representation, and reasoning remained active research topics, the methods and tools required to build useful and powerful KBS applications had become sufficiently well understood to facilitate the development and delivery of systems in many diverse domains. However, as organizations began to use the technology, concerns arose about the reliability of KBSs.
Both these facts run counter to other connectionist models but easily fit SDM. Sparse Distributed Memory will be of interest to anyone doing research in neural models or brain physiology. As the theory is refined, the book will also be of interest to those trying to find applications for neural models. Finally, it will be fascinating to anyone who is even slightly curious about human intelligence and how it might arise from the brain. Terry Rooker is a graduate student at the Oregon Graduate Institute.