R. B. Abhyankar

Emphasizing theory and implementation issues more than specific applications and Prolog programming techniques, Computing with Logic: Logic Programming with Prolog (The Benjamin Cummings Publishing Company, Menlo Park, Calif., 1988, 535 pp., $27.95) by David Maier and David S. Warren, respected researchers in logic programming, is a superb book. Offering an in-depth treatment of advanced topics, the book also includes the necessary background material on logic and automatic theorem proving, making it self-contained. The only real prerequisite is a first course in data structures, although it would help if the reader has also had a first course in program translation. The book has a wealth of exercises and would make an excellent textbook for advanced undergraduate or graduate students in computer science; it is also appropriate for programmers interested in the implementation of Prolog. The book presents the concepts of logic programming through the theory, implementation, and application of Proplog, Datalog, and Prolog, three logic programming languages of increasing complexity that are based on Horn clause subsets of propositional, predicate, and functional logic, respectively. This incremental approach, unique to this book, is effective in conveying a thorough understanding of the subject. The book consists of 12 chapters grouped into three parts (Part 1: chapters 1 to 3; Part 2: chapters 4 to 6; and Part 3: chapters 7 to 12), an appendix, and an index. The three parts, each dealing with one of these logic programming languages, are organized the same way. First, the authors informally present the language using examples; an interpreter is also presented.
Then the formal syntax and semantics for the language and logic are presented, along with soundness and completeness results for the logic and the effects of various search strategies. Next, they give optimization techniques for the interpreter. Each chapter ends with exercises, brief comments regarding the material in the chapter, and a bibliography. Chapter 1 presents top-down and bottom-up interpreters for Proplog. Chapter 2 offers a good discussion of the related notions of negation as failure, the closed-world assumption, minimal models, and stratified programs. Chapter 3 considers clause indexing and lazy concatenation as optimization techniques for the Proplog interpreter of chapter 1. Chapter 4 explains the connection between Datalog and relational algebra. Chapter 5 contains a proof of Herbrand's theorem for predicate logic.
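To give a flavor of the kind of interpreter the book studies for its simplest language, here is a minimal top-down (goal-directed) interpreter for propositional Horn clauses in the spirit of Proplog. This sketch is mine, not the book's code; the program representation and names are illustrative assumptions:

```python
# Minimal top-down interpreter for Proplog-style programs:
# a program is a list of Horn clauses (head, [body atoms]);
# a fact is a clause with an empty body.
def solve(program, goals, depth=0, limit=100):
    """Return True if every goal atom is derivable from the program."""
    if depth > limit:            # crude guard against looping programs
        return False
    if not goals:                # all goals discharged: success
        return True
    first, rest = goals[0], goals[1:]
    for head, body in program:   # try each clause whose head matches
        if head == first and solve(program, body + rest, depth + 1, limit):
            return True
    return False                 # no clause proves the first goal

# Example: a is derivable via b and c; e is not derivable at all.
prog = [("a", ["b", "c"]), ("b", []), ("c", ["d"]), ("d", [])]
print(solve(prog, ["a"]))   # True
print(solve(prog, ["e"]))   # False
```

A bottom-up interpreter, by contrast, would repeatedly add clause heads whose bodies are already derived until a fixed point is reached; the book presents both styles and then optimizes them.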

The workshop was organized by Jack Minker and John McCarthy. The Program Committee members were Krzysztof Apt, John Horty, Sarit Kraus, Vladimir Lifschitz, John McCarthy, Jack Minker, Don Perlis, and Ray Reiter. The purpose of the workshop was to bring together researchers who use logic as a fundamental tool in AI to permit them to review accomplishments, assess future directions, and share their research in logic-based AI (LBAI). This article is a summary of the workshop. The areas selected for discussion at the workshop were abductive and inductive reasoning, applications of theorem proving, commonsense reasoning, computational logic, constraints, logic and high-level robotics, logic and language, logic and planning, logic for agents and actions, logic of causation and action, logic, probability and decision theory, nonmonotonic reasoning, theories of belief, and knowledge representation.

To read the book Automated Reasoning: Thirty-Three Basic Research Problems (Prentice Hall, Englewood Cliffs, N.J., 1987, 300 pp., $11.00) by Larry Wos "it is not necessary to be an expert in mathematics or logic or computer science" (from the preface). However, even if you are such an expert, you will read it with interest, and likely, with enjoyment. The book is outstanding for its presentation of the theme. Following the introductory chapter, Wos discusses some obstacles to the automation of reasoning in Chapter 2. In Chapter 3, he lists the research problems (with short descriptions) in nine groups: six problems on strategy, five on inference rules, six on demodulation, one on subsumption, three on knowledge representation, two on global approach, one on logic programming, two on self-analysis, and six on other areas. After a short review of automated reasoning (AR) in Chapter 4, these problems are discussed in detail in Chapter 5. Chapter 6 gives some sets of test problems, all concerning a mathematical discipline.

What Ray Reiter has done is take a set of ideas worked out by him and his collaborators over the last 11 years and recrystallize them into a sustained and consistent presentation. This is not a collection of those papers but a complete rewrite that avoids the usual repetition and notational inconsistency that one might expect. It makes one wish everyone as prolific as Reiter would copy his example--but because that's unlikely, we must be grateful for what he has given us. In case you haven't heard, Reiter and his crew, starting with the publication of Reiter (1991), breathed new life into the situation calculus (McCarthy and Hayes 1969), which had gotten the reputation of being of limited expressiveness. The basic concept of the calculus is, of course, the situation, which we can think of as a state of affairs, that is, a complete specification of the truth values of all propositions (in a suitable logical language), although that's closer to McCarthy's and Hayes's traditional formulation than the analysis Reiter settles on (which I describe later).

At a workshop held in Toulouse, France, in 1977, Gallaire, Minker, and Nicolas stated that logic and databases was a field in its own right. This was the first time that this designation was made. The impetus for it started approximately 20 years ago, in 1976, when I visited Gallaire and Nicolas in Toulouse, France. In this article, I provide an assessment of what has been achieved in the 20 years since the field started as a distinct discipline. Prominent among early developments was the work by Levien and Maron (1965) and Kuhns (1967), and by Green and Raphael (1968a), who were the first to realize the importance of the Robinson (1965) resolution principle for databases.

A basic premise of AI research is that what we observe as human intelligence is in fact a computation, either directly or as an emergent effect. One attempt at classifying and distinguishing types of AI researchers was to call them all either scruffy (those who wrote code and implemented systems) or neat (those who based AI on some formalism such as first-order predicate calculus). Out of necessity, researchers tend to focus on a particular aspect of intelligence to simulate. When this is done, the effect is to restrict the class of computations being considered. The goal is to build pieces of intelligence.

In the past, BDDs have significantly improved the performance of algorithms and enabled the solution of new classes of problems in areas such as formal verification and logic synthesis (see, for example, Burch et al. [1992]). Surprisingly, BDDs have only recently been introduced to implement the solution of planning problems (Cimatti et al. 1997). The goal of our project was to investigate whether BDDs might also help to increase the efficiency of algorithms solving problems in the field of reasoning about action and change. As a start, I have implemented the solution of fluent calculus planning problems restricted to deterministic actions and propositional fluents (Hölldobler and Störr 2000; Störr 2001). The idea of BDDs is similar to that of decision trees: A Boolean function is represented as a rooted, directed acyclic graph.
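To make the data structure concrete, the following is a minimal reduced ordered BDD sketch in Python. It is my own illustration under simplifying assumptions (tuples as hash-consed nodes, integers 0 and 1 as terminals, a fixed variable order), and is far simpler than the packages used in the systems cited above:

```python
# Minimal ROBDD: a node is a tuple (var, low, high); the unique table
# hash-conses nodes so structurally equal subgraphs are shared.
table = {}

def node(var, low, high):
    if low == high:              # reduction rule: test is redundant
        return low
    key = (var, low, high)
    if key not in table:         # sharing rule: one node per triple
        table[key] = key
    return table[key]

def apply_op(op, u, v):
    """Combine two BDDs with a Boolean operator; variables are ints,
    and a smaller index sits closer to the root."""
    if isinstance(u, int) and isinstance(v, int):
        return op(u, v)          # both terminals: evaluate directly
    uv = u[0] if isinstance(u, tuple) else float("inf")
    vv = v[0] if isinstance(v, tuple) else float("inf")
    m = min(uv, vv)              # split on the topmost variable
    u0, u1 = (u[1], u[2]) if uv == m else (u, u)
    v0, v1 = (v[1], v[2]) if vv == m else (v, v)
    return node(m, apply_op(op, u0, v0), apply_op(op, u1, v1))

# Build x1 AND x2, then OR the result with x1: the reduction rules
# collapse the graph back to just x1.
x1 = node(1, 0, 1)
x2 = node(2, 0, 1)
f = apply_op(lambda a, b: a & b, x1, x2)
g = apply_op(lambda a, b: a | b, f, x1)
print(g == x1)   # True
```

Real BDD packages add memoization of `apply_op` and garbage collection of the unique table, which is what makes operations polynomial in the sizes of the argument graphs.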