Buro, Michael


Reports of the Workshops Held at the Tenth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment

AI Magazine

The AIIDE-14 workshop program was held Friday and Saturday, October 3–4, 2014, at North Carolina State University in Raleigh, North Carolina. The program comprised five workshops covering a wide range of topics. The workshops held Friday were Games and Natural Language Processing, and Artificial Intelligence in Adversarial Real-Time Games. The workshops held Saturday were Diversity in Games Research, Experimental Artificial Intelligence in Games, and Musical Metacreation.


The Eighth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment

AI Magazine

The Eighth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE) was held October 8–12, 2012, at Stanford University in Palo Alto, California. The conference included research and industry tracks as well as a demonstration program. It featured 16 technical papers, 16 posters, and one demonstration, along with invited speakers, the StarCraft AI competition, a newly introduced Doctoral Consortium, and five workshops. This report summarizes the activities of the conference.


Real-Time Strategy Game Competitions

AI Magazine

In recent years, real-time strategy (RTS) games have gained attention in the AI research community for their multitude of challenging and relevant real-time decision problems that must be solved to win against human experts or to collaborate effectively with other players in team games. In this article we motivate research in this area, give an overview of past RTS game AI competitions, and discuss future directions.


Alpha-Beta Pruning for Games with Simultaneous Moves

AAAI Conferences

Alpha-Beta pruning is one of the most powerful and fundamental minimax search improvements. It was designed for sequential two-player zero-sum perfect-information games. In this paper we introduce an Alpha-Beta-like sound pruning method for the more general class of “stacked matrix games,” which allow for simultaneous moves by both players. This is accomplished by maintaining upper and lower bounds on the achievable payoffs in states with simultaneous actions and by pruning dominated actions based on the feasibility of certain linear programs. Empirical data shows considerable savings in terms of expanded nodes compared to naive depth-first move computation without pruning.
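
For readers unfamiliar with the baseline technique, the following is a minimal sketch of classical Alpha-Beta (negamax) search for sequential two-player zero-sum perfect-information games. The Node class is a toy placeholder rather than the paper's interface, and the paper's extension to simultaneous-move stacked matrix games, which maintains payoff bounds and prunes dominated actions via linear programs, is not reproduced here.

```python
# Minimal negamax-style alpha-beta sketch for sequential two-player,
# zero-sum, perfect-information games. The Node class below is a toy
# stand-in for a real game interface, not the paper's implementation.

class Node:
    """Toy game-tree node; value() is from the viewpoint of the player to move."""
    def __init__(self, value=0, children=()):
        self._value = value
        self._children = children

    def is_terminal(self):
        return not self._children

    def children(self):
        return self._children

    def value(self):
        return self._value


def alpha_beta(node, depth, alpha, beta):
    """Return the negamax value of `node` for the player to move."""
    if depth == 0 or node.is_terminal():
        return node.value()
    best = float("-inf")
    for child in node.children():
        # Child values are from the opponent's viewpoint, hence the negation
        # and the swapped, negated bounds.
        score = -alpha_beta(child, depth - 1, -beta, -alpha)
        if score > best:
            best = score
        if best > alpha:
            alpha = best
        if alpha >= beta:   # cutoff: the opponent will steer play away from here
            break
    return best


if __name__ == "__main__":
    # Leaf values are stated from the perspective of the player to move at the leaf.
    tree = Node(children=(Node(children=(Node(3), Node(5))),
                          Node(children=(Node(2), Node(9)))))
    print(alpha_beta(tree, 2, float("-inf"), float("inf")))  # prints 3
```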


Recap of the Seventh AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE)

AI Magazine

The Seventh AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment was held October 11–14, 2011, on the campus of Stanford University near Palo Alto, California. This report summarizes the conference and related activities.


Improving State Evaluation, Inference, and Search in Trick-Based Card Games

AAAI Conferences

Skat is Germany's national card game, played by millions of players around the world. In this paper, we present the world's first computer skat player that plays at the level of human experts. This performance is achieved by improving state evaluations using game data produced by human players and by using these state evaluations to perform inference on the unobserved hands of opposing players. Our results demonstrate the gains from adding inference to an imperfect-information game player and show that training on data from average human players can result in expert-level playing strength.
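
As an illustration of how a learned state evaluation can drive inference over hidden information, the following is a small sketch of likelihood-weighted sampling of opponent hands. All names (sample_worlds, choose_move, action_likelihood, evaluate) are hypothetical placeholders, not the actual skat player's interface, features, or trained models.

```python
import random

# Hypothetical sketch of likelihood-weighted inference over hidden opponent
# hands; function names, parameters, and the model interface are illustrative
# placeholders, not the actual skat player's code or training data.

def sample_worlds(unseen_cards, hand_sizes, num_samples, rng):
    """Deal the unseen cards randomly into the two opponents' hands."""
    worlds = []
    for _ in range(num_samples):
        cards = list(unseen_cards)
        rng.shuffle(cards)
        split = hand_sizes[0]
        worlds.append((frozenset(cards[:split]),
                       frozenset(cards[split:split + hand_sizes[1]])))
    return worlds


def choose_move(candidate_moves, unseen_cards, hand_sizes,
                action_likelihood, evaluate, num_samples=200, seed=0):
    """Pick the move with the best likelihood-weighted state evaluation.

    action_likelihood(world) -> how plausible the opponents' observed bids
    and plays are if their hidden hands were `world` (a stand-in for a model
    trained on human game data); evaluate(move, world) -> value of playing
    `move` when the hidden hands are `world`.
    """
    rng = random.Random(seed)
    worlds = sample_worlds(unseen_cards, hand_sizes, num_samples, rng)
    weights = [action_likelihood(world) for world in worlds]
    total = sum(weights) or 1.0
    best_move, best_score = None, float("-inf")
    for move in candidate_moves:
        score = sum(w * evaluate(move, world)
                    for w, world in zip(weights, worlds)) / total
        if score > best_score:
            best_move, best_score = move, score
    return best_move
```

In this sketch, hidden-card assignments consistent with the observed cards are sampled, weighted by how plausible the opponents' earlier actions are under each assignment, and candidate moves are ranked by their weighted average evaluation.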