Kusumoto, Mitsuru, Yahata, Keisuke, Sakai, Masahiro

The problem-solving process in automated theorem proving (ATP) can be interpreted as a search problem in which the prover constructs a proof tree step by step. In this paper, we propose a deep reinforcement learning algorithm for proof search in intuitionistic propositional logic. The most significant challenge in applying deep learning to ATP is the absence of a large, public theorem database. We overcame this issue by applying a novel data augmentation procedure at each iteration of the reinforcement learning. We also improve the efficiency of the algorithm by representing the syntactic structure of formulas with a novel compact graph representation. Using the large volume of augmented data, we train highly accurate graph neural networks that approximate the value function over the syntactic structures of formulas. Our method is also cost-efficient in terms of computational time. We show that our prover outperforms Coq's $\texttt{tauto}$ tactic, a prover based on human-engineered heuristics: within the specified time limit, our prover solved 84% of the theorems in a benchmark library, while $\texttt{tauto}$ solved only 52%.
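The idea of a compact graph representation of formulas can be illustrated with a minimal sketch (all names here are illustrative, not taken from the paper): instead of a syntax tree, identical subformulas are merged into a shared node of a DAG via hash-consing, so repeated subformulas are stored once.

```python
# Minimal sketch of a compact graph (DAG) representation of propositional
# formulas: structurally identical subformulas are shared via hash-consing.
# Illustrative only; not the representation from the paper.

class FormulaGraph:
    def __init__(self):
        self.nodes = []     # node id -> (label, child ids)
        self._index = {}    # structural key -> node id

    def add(self, label, *children):
        """Return the node id for (label, children), reusing it if present."""
        key = (label, children)
        if key not in self._index:
            self._index[key] = len(self.nodes)
            self.nodes.append((label, children))
        return self._index[key]

g = FormulaGraph()
p = g.add("p")
q = g.add("q")
imp = g.add("->", p, q)                        # p -> q
conj = g.add("and", imp, g.add("->", p, q))    # (p -> q) /\ (p -> q)

# The repeated subformula p -> q maps to the same node, so the whole
# formula needs 4 nodes where the syntax tree would need 7.
print(len(g.nodes))   # → 4
```

The same sharing also lets a graph neural network compute one embedding per distinct subformula rather than one per syntax-tree occurrence.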

R. B. Abhyankar

Emphasizing theory and implementation issues more than specific applications and Prolog programming techniques, Computing with Logic: Logic Programming with Prolog (The Benjamin/Cummings Publishing Company, Menlo Park, Calif., 1988, 535 pp., $27.95) by David Maier and David S. Warren, respected researchers in logic programming, is a superb book. Offering an in-depth treatment of advanced topics, the book also includes the necessary background material on logic and automatic theorem proving, making it self-contained. The only real prerequisite is a first course in data structures, although it would be helpful if the reader has also had a first course in program translation. The book has a wealth of exercises and would make an excellent textbook for advanced undergraduate or graduate students in computer science; it is also appropriate for programmers interested in the implementation of Prolog. The book presents the concepts of logic programming through the theory, implementation, and application of Proplog, Datalog, and Prolog, three logic programming languages of increasing complexity that are based on Horn clause subsets of propositional, predicate, and functional logic, respectively. This incremental approach, unique to this book, is effective in conveying a thorough understanding of the subject. The book consists of 12 chapters grouped into three parts (Part 1: chapters 1 to 3; Part 2: chapters 4 to 6; Part 3: chapters 7 to 12), an appendix, and an index. The three parts, each dealing with one of these logic programming languages, are organized the same way. First, the authors informally present the language using examples; an interpreter is also presented. Then the formal syntax and semantics for the language and logic are presented, along with soundness and completeness results for the logic and the effects of various search strategies. Next, they give optimization techniques for the interpreter. Each chapter ends with exercises, brief comments regarding the material in the chapter, and a bibliography. Chapter 1 presents top-down and bottom-up interpreters for Proplog. Chapter 2 offers a good discussion of the related notions of negation as failure, the closed-world assumption, minimal models, and stratified programs. Chapter 3 considers clause indexing and lazy concatenation as optimization techniques for the Proplog interpreter of chapter 1. Chapter 4 explains the connection between Datalog and relational algebra. Chapter 5 contains a proof of Herbrand's theorem for predicate logic.
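The top-down interpretation of Proplog (propositional Horn clauses) described above can be sketched in a few lines. This is a hedged illustration of the idea, not the book's interpreter: a goal succeeds if some clause with that goal as its head has a body whose goals all succeed, with a depth bound to cut off circular programs.

```python
# Sketch of a top-down interpreter for Proplog (propositional Horn clauses).
# A program maps each head atom to a list of alternative bodies, where each
# body is a list of subgoals. Illustrative only; not code from the book.

def prove(program, goal, depth=50):
    """Return True if `goal` is derivable from `program` (depth-bounded)."""
    if depth == 0:
        return False  # give up on (possibly circular) deep derivations
    for body in program.get(goal, []):
        if all(prove(program, sub, depth - 1) for sub in body):
            return True
    return False

# The program:  a :- b, c.    b.    c :- b.
program = {"a": [["b", "c"]], "b": [[]], "c": [["b"]]}
print(prove(program, "a"))   # → True
print(prove(program, "d"))   # → False
```

A bottom-up interpreter would instead start from the facts and repeatedly fire clauses whose bodies are already derived, until a fixpoint is reached.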

The eight sections are (1) Artificial Intelligence (introductory material); (2) Problem-Solving (search and game playing); (3) Knowledge and Reasoning (propositional and predicate logic, inference techniques, knowledge representation); (4) Acting Logically (planning); (5) Uncertain Knowledge and Reasoning (probabilistic reasoning, Bayesian nets, decision-theoretic techniques); (6) Learning (inductive learning, neural nets, reinforcement learning); (7) Communicating, Perceiving, and Acting (natural language processing, computer vision, robotics); and (8) Conclusions (philosophical foundations and summary). What makes this textbook so good? First, it is remarkably comprehensive. In the preface, the authors suggest several alternative paths through the book that could serve as the basis of a one-semester course. At the University of Pittsburgh, my colleagues and I cover roughly the first half of the book (Sections 1-4) in the first-semester introductory graduate AI course, covering most of Sections 5 through 8 in a second-semester course.

Benzmüller, Christoph, Paleo, Bruno Woltzenlogel

G\"odel's ontological proof has been analysed for the first time with an unprecedented degree of detail and formality with the help of higher-order theorem provers. The following has been done (in this order): a detailed natural deduction proof; a formalization of the axioms, definitions, and theorems in the TPTP THF syntax; automatic verification of the consistency of the axioms and definitions with Nitpick; automatic demonstration of the theorems with the provers LEO-II and Satallax; a step-by-step formalization using the Coq proof assistant; and a formalization using the Isabelle proof assistant, where the theorems (and some additional lemmata) have been automated with Sledgehammer and Metis.
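The axioms and definitions in question, in Scott's version of the proof, are usually stated in higher-order modal logic roughly as follows ($P$ reads "is a positive property", $G$ "is Godlike", $\mathit{NE}$ "has necessary existence"; this is a summary sketch, not the exact THF or Isabelle encoding):

```latex
\begin{align*}
\textbf{A1} &\quad P(\neg\varphi) \leftrightarrow \neg P(\varphi)\\
\textbf{A2} &\quad \big(P(\varphi) \wedge \Box\,\forall x\,(\varphi(x) \rightarrow \psi(x))\big) \rightarrow P(\psi)\\
\textbf{T1} &\quad P(\varphi) \rightarrow \Diamond\,\exists x\,\varphi(x)\\
\textbf{D1} &\quad G(x) \leftrightarrow \forall\varphi\,(P(\varphi) \rightarrow \varphi(x))\\
\textbf{A3} &\quad P(G)\\
\textbf{A4} &\quad P(\varphi) \rightarrow \Box\,P(\varphi)\\
\textbf{D2} &\quad \varphi \mathbin{\mathrm{ess}} x \leftrightarrow \varphi(x) \wedge \forall\psi\,\big(\psi(x) \rightarrow \Box\,\forall y\,(\varphi(y) \rightarrow \psi(y))\big)\\
\textbf{D3} &\quad \mathit{NE}(x) \leftrightarrow \forall\varphi\,(\varphi \mathbin{\mathrm{ess}} x \rightarrow \Box\,\exists y\,\varphi(y))\\
\textbf{A5} &\quad P(\mathit{NE})\\
\textbf{T3} &\quad \Box\,\exists x\,G(x)
\end{align*}
```

The theorem T3 (necessarily, a Godlike being exists) is the conclusion that the provers LEO-II and Satallax demonstrate from the axioms above.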

I had a fun ride attending a very interesting lecture this semester called Programming Paradigms. I learned about the four main paradigms that exist: imperative, object-oriented, functional, and logic programming. Now, I'm sure every developer has heard about imperative, OO, and functional, but to be honest I had no idea what logic programming was about. I was intrigued: what could this paradigm I had never heard of be, what does it excel at, and could it be useful for day-to-day programming problems? The book The Pragmatic Programmer has a tip called "Invest Regularly in Your Knowledge Portfolio": different languages solve the same problems in different ways.

Reyes, Maritza (University of Texas at Austin) | Perez, Cynthia (Texas Tech University) | Upchurch, Rocky (New Deal High School, Lubbock, Texas) | Yuen, Timothy (University of Texas at San Antonio) | Zhang, Yuanlin (Texas Tech University)

This paper discusses the design of an introductory computer science course for high school students using declarative programming. Though not often taught at the K-12 level, declarative programming is a viable paradigm for teaching computer science due to its importance in artificial intelligence and in helping students explore and understand problem spaces. This paper describes the authors' implementation of a declarative programming course for high school students during a 4-week summer session.

Maslan, Nicole (Claremont McKenna College) | Roemmele, Melissa (University of Southern California) | Gordon, Andrew S. (University of Southern California)

We present a new set of challenge problems for the logical formalization of commonsense knowledge, called Triangle-COPA. This set of one hundred problems is smaller than other recent commonsense reasoning question sets, but is unique in that it is specifically designed to support the development of logic-based commonsense theories, via two means. First, questions and potential answers are encoded in logical form using a fixed vocabulary of predicates, eliminating the need for sophisticated natural language processing pipelines. Second, the domain of the questions is tightly constrained so as to focus formalization efforts on one area of inference, namely the commonsense reasoning that people do about human psychology. We describe the authoring methodology used to create this problem set, and our analysis of the scope of requisite commonsense knowledge. We then show an example of how problems can be solved using an implementation of weighted abduction.
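As a toy illustration of the weighted-abduction idea (the predicates, constants, and costs below are hypothetical and are not the fixed vocabulary of Triangle-COPA): each candidate answer is an explanation consisting of assumed literals, each assumption carries a cost, observed literals are free, and the lowest-cost explanation is chosen.

```python
# Toy sketch of weighted abduction: pick the candidate explanation whose
# unproven assumptions have the lowest total cost. All predicates and
# weights here are hypothetical, not taken from Triangle-COPA.

# Logical-form observation: the big triangle approaches the circle, which flees.
OBSERVED = {"approach(bt, cir)", "flee(cir)"}

# Candidate answers, each with the literals it must assume and their costs.
CANDIDATES = {
    "afraid(cir, bt)": {"aggressive(bt)": 1.0, "perceive(cir, bt)": 0.5},
    "likes(cir, bt)":  {"friendly(bt)": 2.0, "playing(cir)": 2.5},
}

def explanation_cost(assumptions):
    """Sum the costs of assumed literals; observed literals cost nothing."""
    return sum(cost for lit, cost in assumptions.items()
               if lit not in OBSERVED)

def best_answer(candidates):
    """Return the candidate whose explanation is cheapest to assume."""
    return min(candidates, key=lambda a: explanation_cost(candidates[a]))

print(best_answer(CANDIDATES))   # → afraid(cir, bt)
```

A real weighted-abduction engine additionally unifies assumptions with axioms and with each other so that shared assumptions are paid for only once; the toy above keeps only the cost-minimization step.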