Search in Imperfect Information Games

arXiv.org Artificial Intelligence

From the very dawn of the field, search with value functions has been a fundamental concept in computer-game research. Turing's chess algorithm from 1950 was able to think two moves ahead, and Shannon's work on chess from 1950 includes an extensive section on evaluation functions to be used within a search. Samuel's checkers program from 1959 already combines search with value functions learned through self-play and bootstrapping. TD-Gammon improves upon those ideas, using neural networks to learn complex value functions -- only to use them, again, within search. The combination of decision-time search and value functions has been present in the remarkable milestones where computers bested their human counterparts in long-standing challenging games -- Deep Blue for chess and AlphaGo for Go. Until recently, this powerful framework of search aided by (learned) value functions has been limited to perfect information games. As many interesting problems do not give the agent perfect information about the environment, this was an unfortunate limitation. This thesis introduces the reader to sound search for imperfect information games.
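
The pattern this abstract builds on, decision-time search truncated by a value function, fits in a few lines. The Python sketch below is a generic illustration, not code from the thesis; the state interface (is_terminal, outcome, legal_moves, apply) and the evaluate heuristic are assumed placeholders.

    # Minimal sketch of depth-limited search backed by a value function.
    # The state interface and `evaluate` (a learned or handcrafted value
    # function, from the perspective of the player to move) are
    # hypothetical placeholders, not an API from the thesis.
    def negamax(state, depth, evaluate):
        """Value of `state` for the player to move (zero-sum, alternating)."""
        if state.is_terminal():
            return state.outcome()   # e.g. +1 win, 0 draw, -1 loss
        if depth == 0:
            return evaluate(state)   # the value function truncates the search
        best = float("-inf")
        for move in state.legal_moves():
            best = max(best, -negamax(state.apply(move), depth - 1, evaluate))
        return best

Deep Blue and AlphaGo refine this template with alpha-beta pruning and Monte Carlo tree search, respectively, but the division of labor between search and evaluation is the same.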


Chess AI: Competing Paradigms for Machine Intelligence

arXiv.org Artificial Intelligence

Endgame studies have long served as a tool for testing human creativity and intelligence. We find that they can serve as a tool for testing machine ability as well. Two of the leading chess engines, Stockfish and Leela Chess Zero (LCZero), employ significantly different methods during play. We use Plaskett's Puzzle, a famous endgame study from the late 1970s, to compare the two engines. Our experiments show that Stockfish outperforms LCZero on the puzzle. We examine the algorithmic differences between the engines and use our observations as a basis for carefully interpreting the test results. Drawing inspiration from how humans solve chess problems, we ask whether machines can possess a form of imagination. On the theoretical side, we describe how Bellman's equation may be applied to optimize the probability of winning. To conclude, we discuss the implications of our work for artificial intelligence (AI) and artificial general intelligence (AGI), suggesting possible avenues for future research.
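
The Bellman-equation idea mentioned above can be stated compactly. The notation below is a sketch of the general formulation rather than the paper's exact statement: V(s) is the probability of eventually winning from position s, A(s) the set of legal moves, and P(s' | s, a) a probabilistic model of the opponent's replies.

    V(s) = \begin{cases}
      1 & \text{if } s \text{ is a won terminal position,} \\
      0 & \text{if } s \text{ is a drawn or lost terminal position,} \\
      \max_{a \in A(s)} \sum_{s'} P(s' \mid s, a)\, V(s') & \text{otherwise,}
    \end{cases}

so a policy that picks the maximizing move optimizes the probability of winning directly, rather than a material-style evaluation.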


Notes on a New Philosophy of Empirical Science

arXiv.org Machine Learning

This book presents a methodology and philosophy of empirical science based on large scale lossless data compression. In this view, a theory is scientific if it can be used to build a data compression program, and it is valuable if it can compress a standard benchmark database to a small size, taking into account the length of the compressor itself. This methodology therefore includes an Occam principle as well as a solution to the problem of demarcation. Because of the fundamental difficulty of lossless compression, this type of research must be empirical in nature: compression can only be achieved by discovering and characterizing empirical regularities in the data. Because of this, the philosophy provides a way to reformulate fields such as computer vision and computational linguistics as empirical sciences: the former by attempting to compress databases of natural images, the latter by attempting to compress large text databases. The book argues that the rigor and objectivity of the compression principle should set the stage for systematic progress in these fields. The argument is especially strong in the context of computer vision, which is plagued by chronic problems of evaluation. The book also considers the field of machine learning. Here the traditional approach requires that the models proposed to solve learning problems be extremely simple, in order to avoid overfitting. However, the world may contain intrinsically complex phenomena, which would require complex models to understand. The compression philosophy can justify complex models because of the large quantity of data being modeled (if the target database is 100 GB, it is easy to justify a 10 MB model). The complex models and abstractions learned on the basis of the raw data (images, language, etc.) can then be reused to solve any specific learning problem, such as face recognition or machine translation.
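
The scoring rule at the heart of this methodology, total codelength equals the compressed size of the benchmark plus the length of the compressor itself, is easy to state operationally. The Python sketch below is an illustration under assumed inputs, with zlib standing in for a theory-based compressor; the book's actual benchmark protocol is more involved.

    import os
    import zlib

    def codelength_score(database_path, compressor_source_path):
        """Total codelength: compressed data plus the compressor's own size.
        A theory scores well only if the regularities it captures shrink
        the benchmark by more than the theory costs to write down."""
        with open(database_path, "rb") as f:
            data = f.read()
        compressed = zlib.compress(data, 9)  # stand-in for a real compressor
        return len(compressed) + os.path.getsize(compressor_source_path)

This is also why the abstract's arithmetic works: against a 100 GB benchmark, a 10 MB model adds only about 0.01% to the total codelength, so even a modest compression gain justifies it.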


Optimizing Selective Search in Chess

arXiv.org Artificial Intelligence

In this paper we introduce a novel method for automatically tuning the search parameters of a chess program using genetic algorithms. Our results show that a large set of parameter values can be learned automatically, such that the resulting performance is comparable with that of manually tuned parameters of top tournament-playing chess programs.
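
As a rough illustration of the approach (not the authors' implementation), a genetic algorithm over a vector of normalized search parameters might look like the following; play_match, which scores a parameter vector by playing games against a reference opponent, is an assumed placeholder.

    import random

    # Hypothetical sketch of GA-based parameter tuning. `play_match(params)
    # -> float` (fitness from games against a reference engine) is a
    # placeholder, not from the paper.
    def evolve(n_params, play_match, pop_size=20, generations=50,
               mutation_rate=0.1):
        pop = [[random.uniform(0, 1) for _ in range(n_params)]
               for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(pop, key=play_match, reverse=True)
            parents = ranked[:pop_size // 2]         # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, n_params)  # one-point crossover
                child = a[:cut] + b[cut:]
                for i in range(n_params):            # per-gene mutation
                    if random.random() < mutation_rate:
                        child[i] = random.uniform(0, 1)
                children.append(child)
            pop = parents + children
        return max(pop, key=play_match)

The expensive step is fitness evaluation, since each candidate must be scored by actual game play; the result reported above is that such a loop can recover parameter sets competitive with hand tuning.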


Verified Null-Move Pruning

arXiv.org Artificial Intelligence

In this article we review standard null-move pruning and introduce our extended version of it, which we call verified null-move pruning. In verified null-move pruning, whenever the shallow null-move search indicates a fail-high, instead of cutting off the search from the current node, the search is continued with reduced depth. Our experiments with verified null-move pruning show that on average, it constructs a smaller search tree with greater tactical strength in comparison to standard null-move pruning. Moreover, unlike standard null-move pruning, which fails badly in zugzwang positions, verified null-move pruning manages to detect most zugzwangs and in such cases conducts a re-search to obtain the correct result. In addition, verified null-move pruning is very easy to implement, and any standard null-move pruning program can use verified null-move pruning by modifying only a few lines of code.
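
The description above translates almost directly into code. The sketch below is a paraphrase of the scheme, not the authors' implementation: a fail-soft alpha-beta search in which a fail-high null-move result reduces the depth of the current node instead of cutting it off. The position interface and R, the customary null-move depth reduction, are assumptions here; the full algorithm in the article additionally switches back to standard (unverified) pruning inside the verification subtree.

    # Sketch of verified null-move pruning, paraphrasing the description
    # above (not the authors' code). The position interface is a
    # placeholder; R is the usual null-move depth reduction.
    def search(pos, depth, alpha, beta, R=3, verified=True):
        if depth <= 0 or pos.is_terminal():
            return pos.evaluate()
        if pos.null_move_allowed():                # e.g. not in check
            score = -search(pos.make_null_move(), depth - 1 - R,
                            -beta, -beta + 1, R, verified)
            if score >= beta:
                if not verified:
                    return score                   # standard null-move cutoff
                depth -= 1                         # verified: keep searching,
                                                   # but at reduced depth
        best = float("-inf")
        for move in pos.legal_moves():
            best = max(best, -search(pos.apply(move), depth - 1,
                                     -beta, -alpha, R, verified))
            alpha = max(alpha, best)
            if alpha >= beta:
                break                              # beta cutoff
        return best

Because the fail-high is verified by a genuine (if shallower) search, zugzwang positions in which the null move misleadingly fails high are usually caught, which matches the behavior reported above.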


A Gamut of Games

AI Magazine

In Shannon's time, it would have seemed improbable that chess programs could one day play at a level comparable to the human world champion. Around this time, Arthur Samuel began work on a checkers program that would demonstrate the capabilities of computational intelligence, and by 1958, Alan Newell and Herb Simon had begun their investigations into chess, which eventually led to fundamental results for AI and cognitive science (Newell, Shaw, and Simon 1958). An impressive lineup, to say the least! Indeed, one of the early goals of AI was to build a program capable of defeating the human world chess champion. This challenge proved to be more difficult than was anticipated; the AI literature is replete with optimistic predictions. It eventually took almost 50 years to complete the task -- a remarkably short time when one considers the software and hardware advances needed. These remarkable accomplishments are the result of a better understanding of the problems being solved, major algorithmic insights, and tremendous advances in hardware technology. The work on computer games has been one of the most successful and visible results of AI research, and the results are truly amazing, even though there is an exponential difference between the best case and the worst case of the underlying search algorithms (Plaat et al. 1996). One can contrast the game world with the real world -- the game of life -- where the rules often change, the scope of the problem is almost limitless, and the participants interact in an infinite number of ways; games can be a microcosm of that real world. For each game covered, an account of the progress in building a world-class program is given, along with a brief description of the strongest program. The histories are necessarily brief. The article reports the past successes where computers have reached world-class strength, realizing the lineage of the ideas.


Foundations and Grand Challenges of Artificial Intelligence: AAAI Presidential Address

AI Magazine

AAAI is a society devoted to supporting progress in the science, technology, and applications of AI. I thought I would use this occasion to share with you some of my thoughts on the recent advances in AI, the insights and theoretical foundations that have emerged out of the past thirty years of stable, sustained, systematic explorations in our field, and the grand challenges motivating the research in our field.