Monte Carlo Go is a promising method for improving the performance of computer Go programs. This approach selects the next move to play on the basis of many Monte Carlo samples. This paper examines the relative advantages of additional samples versus enhancements for Monte Carlo Go. By parallelizing Monte Carlo Go, we were able to increase sample sizes by two orders of magnitude. Experimental results obtained in 9×9 Go show strong evidence that there are tradeoffs among these advantages and performance, indicating a way to go for Monte Carlo Go.
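The move-selection loop the abstract describes can be sketched as follows. This is a minimal, unparallelized illustration only; `legal_moves`, `play`, and `rollout` are hypothetical hooks into a Go engine, not code from the paper.

```python
def mc_choose_move(position, legal_moves, play, rollout, n_samples=1000):
    """Pick the move with the highest Monte Carlo win rate.

    legal_moves(pos), play(pos, move), and rollout(pos) -> 1.0/0.0 are
    assumed to be supplied by a 9x9 Go engine; they are placeholders.
    """
    best_move, best_rate = None, -1.0
    for move in legal_moves(position):
        # Estimate the move's value by averaging many random playouts.
        wins = sum(rollout(play(position, move)) for _ in range(n_samples))
        rate = wins / n_samples
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move
```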
Now entering its eighth year, the Annual Computer Poker Competition (ACPC) is the premier event within the field of computer poker. With both academic and non-academic competitors from around the world, the competition provides an open and international venue for benchmarking computer poker agents. We describe the competition's origins and evolution, current events, and winning techniques. The competition has been held annually since 2006, open to all competitors, in conjunction with top-tier artificial intelligence conferences (AAAI and IJCAI). In 2006 the competition began with only five competitors; the total number of competitors has increased every year since.
A new signature table technique is described together with an improved book learning procedure which is thought to be much superior to the linear polynomial method described earlier. Full use is made of the so-called "alpha-beta" pruning and several forms of forward pruning to restrict the spread of the move tree and to permit the program to look ahead to a much greater depth than it otherwise could do. While still unable to outplay checker masters, the program's playing ability has been greatly improved.
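As a reminder of what alpha-beta pruning does (the abstract itself contains no code), here is a generic negamax-style sketch; `evaluate`, `moves`, and `apply_move` are hypothetical game-specific callables, not Samuel's signature-table machinery.

```python
def alphabeta(state, depth, alpha, beta, evaluate, moves, apply_move):
    """Negamax alpha-beta search over a game tree (fail-hard form)."""
    if depth == 0 or not moves(state):
        return evaluate(state)
    for move in moves(state):
        # Score the reply from the opponent's point of view and negate it.
        score = -alphabeta(apply_move(state, move), depth - 1,
                           -beta, -alpha, evaluate, moves, apply_move)
        if score >= beta:
            return beta                 # cutoff: this line is already refuted
        alpha = max(alpha, score)
    return alpha
```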
This paper demonstrates the use of genetic algorithms for evolving a grandmaster-level evaluation function for a chess program. This is achieved by combining supervised and unsupervised learning. In the supervised learning phase the organisms are evolved to mimic the behavior of human grandmasters, and in the unsupervised learning phase these evolved organisms are further improved upon by means of coevolution. While past attempts succeeded in creating a grandmaster-level program by mimicking the behavior of existing computer chess programs, this paper presents the first successful attempt at evolving a state-of-the-art evaluation function by learning only from databases of games played by humans. Our results demonstrate that the evolved program outperforms a two-time World Computer Chess Champion.
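The supervised phase the abstract outlines, evolving evaluation weights toward agreement with grandmaster move choices, might look roughly like this. The engine hook `best_move(position, weights)` and the selection, crossover, and mutation scheme are illustrative assumptions, not the paper's actual operators.

```python
import random

def fitness(weights, positions, gm_moves, best_move):
    """Fraction of positions where the weighted engine picks the grandmaster's move."""
    hits = sum(best_move(pos, weights) == gm for pos, gm in zip(positions, gm_moves))
    return hits / len(positions)

def evolve(population, positions, gm_moves, best_move, generations=100):
    """Simple generational GA: truncation selection, averaging crossover, Gaussian mutation."""
    for _ in range(generations):
        population.sort(key=lambda w: fitness(w, positions, gm_moves, best_move),
                        reverse=True)
        parents = population[:len(population) // 2]
        children = []
        while len(parents) + len(children) < len(population):
            a, b = random.sample(parents, 2)
            children.append([(x + y) / 2 + random.gauss(0, 0.05)
                             for x, y in zip(a, b)])
        population = parents + children
    return max(population, key=lambda w: fitness(w, positions, gm_moves, best_move))
```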
Video game virtual characters should interact with the player, each other, and the environment. However, the cost of scripting complex behaviors becomes a bottleneck in content creation. Our goal is to help game designers more easily populate their open worlds with background characters that exhibit more believable behaviors. We use a cyclic scheduling model that generates dynamic schedules for the daily lives of virtual characters. The scheduler employs a tiered behavior architecture in which behavior components are modular and reusable. This research validates the usability, from a designer's perspective, of an implementation of this model. We present the results of a user study that evaluates the scheduling system against manual scripting on three metrics of behavior creation: behavior completeness, behavior correctness, and behavior implementation time. The results indicate that the behavior architecture produces more reliable behaviors and improves designer efficiency, which will reduce the cost of generating more believable character behaviors.
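A cyclic daily schedule of the kind described could be represented minimally as below; the behavior names and the wrap-around lookup are illustrative only and do not reflect the paper's tiered architecture or game-engine integration.

```python
from dataclasses import dataclass, field

@dataclass
class ScheduleEntry:
    start_hour: float      # in-game hour at which the behavior becomes active
    behavior: str          # name of a reusable behavior component

@dataclass
class CyclicSchedule:
    entries: list = field(default_factory=list)   # kept sorted by start_hour

    def add(self, start_hour, behavior):
        self.entries.append(ScheduleEntry(start_hour, behavior))
        self.entries.sort(key=lambda e: e.start_hour)

    def current(self, game_hour):
        """Return the behavior active at game_hour, wrapping around midnight."""
        hour = game_hour % 24
        active = self.entries[-1].behavior         # last entry carries past midnight
        for e in self.entries:
            if e.start_hour <= hour:
                active = e.behavior
        return active

# Usage: a background villager cycling through a simple day.
villager = CyclicSchedule()
villager.add(8, "work_at_market")
villager.add(12, "eat_lunch")
villager.add(13, "work_at_market")
villager.add(18, "socialize_at_tavern")
villager.add(22, "sleep_at_home")
print(villager.current(14.5))   # -> "work_at_market"
```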