Russell, Stuart (University of California, Berkeley) | Dietterich, Tom (Oregon State University) | Horvitz, Eric (Microsoft) | Selman, Bart (Cornell University) | Rossi, Francesca (University of Padova) | Hassabis, Demis (DeepMind) | Legg, Shane (DeepMind) | Suleyman, Mustafa (DeepMind) | George, Dileep (Vicarious) | Phoenix, Scott (Vicarious)
The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. In summary, we believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.
Long, Derek | Kautz, Henry | Selman, Bart | Bonet, Blai | Geffner, Hector | Koehler, Jana | Brenner, Michael | Hoffmann, Joerg | Rittinger, Frank | Anderson, Corin R. | Weld, Daniel S. | Smith, David E. | Fox, Maria
In 1998, the international planning community was invited to take part in the first planning competition, hosted by the Artificial Intelligence Planning Systems Conference, to provide a new impetus for empirical evaluation and direct comparison of automatic domain-independent planning systems. This article describes the systems that competed in the event, examines the results, and considers some of the implications for the future of the field.
Mitchell, David | Selman, Bart
The first computational task shown to be NP-hard, by Cook (1971), was propositional satisfiability, or SAT: given a formula of the propositional calculus, decide whether there is an assignment to its variables that makes the formula true according to the usual rules of interpretation. In this paper, we present empirical results showing that random instances of satisfiability can be generated in such a way that easy and hard sets of instances (for a particular SAT procedure, anyway) are predictable in advance. Our experiments use the Davis-Putnam procedure, and we believe this was a good choice for two reasons: first, it has been shown to be a variant of resolution (Vellino 1989; Galil 1977), the most widely used general reasoning method in AI; second, almost all empirical work on SAT testing has used one or another refinement of this method, which facilitates comparison. The procedure performs a backtracking depth-first search in the space of all truth assignments, incrementally assigning truth values to variables and simplifying the formula.
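The backtracking search described in this abstract can be sketched in a few lines. This is a minimal illustration, not the implementation evaluated in the paper: the clause encoding (lists of signed-integer literals, where `-v` denotes the negation of variable `v`) and the branching heuristic (pick a variable from the first clause) are assumptions for the sake of the sketch, and unit propagation and pure-literal rules are omitted.

```python
def simplify(clauses, lit):
    """Assign lit = True: drop satisfied clauses, remove -lit from the rest.
    Returns None if an empty clause (a conflict) is produced."""
    out = []
    for clause in clauses:
        if lit in clause:
            continue                      # clause satisfied, drop it
        reduced = [l for l in clause if l != -lit]
        if not reduced:
            return None                   # empty clause: contradiction
        out.append(reduced)
    return out

def dpll(clauses):
    """Backtracking depth-first search over truth assignments,
    incrementally assigning variables and simplifying the formula."""
    if clauses is None:
        return False                      # conflict on this branch
    if not clauses:
        return True                       # every clause satisfied
    var = abs(clauses[0][0])              # naive choice: branch on first literal's variable
    return dpll(simplify(clauses, var)) or dpll(simplify(clauses, -var))
```

For example, `dpll([[1, 2], [-1], [2]])` reports satisfiable, while `dpll([[1, 2], [-1, 2], [-2]])` exhausts both branches and reports unsatisfiable.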
Mitchell, David | Selman, Bart
We introduce a greedy local search procedure called GSAT for solving propositional satisfiability problems. Our experiments show that this procedure can be used to solve hard, randomly generated problems that are an order of magnitude larger than those that can be handled by more traditional approaches such as the Davis-Putnam procedure or resolution. We also show that GSAT can solve structured satisfiability problems quickly. In particular, we solve encodings of graph coloring problems, N-queens, and Boolean induction. General application strategies and limitations of the approach are also discussed. GSAT is best viewed as a model-finding procedure. Its good performance suggests that it may be advantageous to reformulate reasoning tasks that have traditionally been viewed as theorem-proving problems as model-finding tasks. Proc. AAAI-92.
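The greedy local search idea behind GSAT can be sketched as follows. This is a simplified illustration under assumed conventions (clauses as lists of signed-integer literals, a full rescoring of each candidate flip), not the tuned implementation from the paper; parameter names like `max_flips` and `max_tries` are illustrative.

```python
import random

def gsat(clauses, n_vars, max_flips=1000, max_tries=10):
    """GSAT sketch: start from a random complete truth assignment, then
    repeatedly flip the variable whose flip satisfies the most clauses.
    Returns a satisfying assignment, or None if none was found (GSAT is
    incomplete: failure does not prove unsatisfiability)."""
    def num_sat(assign):
        return sum(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)

    for _ in range(max_tries):
        # random initial assignment over variables 1..n_vars
        assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            if num_sat(assign) == len(clauses):
                return assign             # model found
            def score(v):                 # clauses satisfied after flipping v
                assign[v] = not assign[v]
                s = num_sat(assign)
                assign[v] = not assign[v]
                return s
            best = max(range(1, n_vars + 1), key=score)
            assign[best] = not assign[best]
    return None
```

The restart loop (`max_tries`) matters in practice: greedy flips can stall on plateaus, and a fresh random assignment gives the search another basin to descend.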