shogi
Python Agent in Ludii

Neto, Izaias S. de Lima, Vieira, Marco A. A. de Aguiar, Tavares, Anderson R.

arXiv.org Artificial Intelligence

Ludii is a Java general game system with a considerable number of board games, an API for developing new agents, and a game description language for creating new games. To improve versatility and ease development, we provide Python interfaces for agent programming, allowing Python modules to be used to implement general game playing agents. To enable Python for creating Ludii agents, the interfaces are implemented using two different Java libraries: jpy and Py4J. The main goal of this work is to determine which version is faster. To do so, we conducted a performance analysis of two GGP algorithms, Minimax adapted to GGP and MCTS, across several combinatorial games with varying depth, branching factor, and ply time. For reproducibility, we provide tutorials and repositories. Our analysis includes predictive models using regression, which suggest that jpy is faster than Py4J, though, as expected, slower than a native Java Ludii agent.
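The Minimax-adapted-to-GGP setup benchmarked above can be sketched game-agnostically. The tiny `NimGame` class and its method names below are illustrative assumptions standing in for a generic game interface, not the paper's actual jpy/Py4J bridge to Ludii:

```python
# A minimal, game-agnostic depth-limited minimax of the kind a GGP agent
# might run. NimGame is a hypothetical stand-in for a general game API.
from dataclasses import dataclass

@dataclass
class NimGame:
    """Tiny two-player Nim: remove 1 or 2 stones; taking the last stone wins."""
    stones: int
    player: int = 1  # 1 = maximizing player, -1 = minimizing player

    def legal_moves(self):
        return [m for m in (1, 2) if m <= self.stones]

    def apply(self, move):
        return NimGame(self.stones - move, -self.player)

    def is_terminal(self):
        return self.stones == 0

    def utility(self):
        # The player to move has no stones left, so the previous mover won.
        return -self.player

def minimax(state, depth):
    if state.is_terminal():
        return state.utility()
    if depth == 0:
        return 0  # neutral heuristic at the depth cutoff
    values = [minimax(state.apply(m), depth - 1) for m in state.legal_moves()]
    return max(values) if state.player == 1 else min(values)

def best_move(state, depth=10):
    pick = max if state.player == 1 else min
    return pick(state.legal_moves(), key=lambda m: minimax(state.apply(m), depth))
```

Because only `legal_moves`, `apply`, `is_terminal`, and `utility` are touched, the same search works unchanged for any game exposing that interface, which is the property GGP agents rely on.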


Pgx: Hardware-Accelerated Parallel Game Simulators for Reinforcement Learning

Koyamada, Sotetsu, Okano, Shinri, Nishimori, Soichiro, Murata, Yu, Habara, Keigo, Kita, Haruka, Ishii, Shin

arXiv.org Artificial Intelligence

We propose Pgx, a suite of board game reinforcement learning (RL) environments written in JAX and optimized for GPU/TPU accelerators. By leveraging JAX's auto-vectorization and parallelization over accelerators, Pgx can efficiently scale to thousands of simultaneous simulations. In our experiments on a DGX-A100 workstation, we found that Pgx simulates RL environments 10-100x faster than existing Python implementations. Pgx includes RL environments commonly used as benchmarks in RL research, such as backgammon, chess, shogi, and Go. Additionally, Pgx offers miniature game sets and baseline models to facilitate rapid research cycles. We demonstrate the efficient training of the Gumbel AlphaZero algorithm with Pgx environments. Overall, Pgx provides high-performance environment simulators for researchers to accelerate their RL experiments. Pgx is available at https://github.com/sotetsuk/pgx.
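The batched-simulation idea behind Pgx can be illustrated without JAX or the library itself. The toy "countdown" environment below is an assumption for the sketch (Pgx's real environments use `jax.vmap`/`jit` over proper game rules); it only shows why one array-wide step call replaces a Python loop over thousands of games:

```python
# Toy illustration of batched environment stepping, using numpy arrays as a
# stand-in for JAX. Environment, API names, and batch size are assumptions.
import numpy as np

def init(batch_size, rng):
    """Start a batch of 'countdown' games with random counters in [5, 15)."""
    return rng.integers(5, 15, size=batch_size)

def step(states, actions):
    """Advance every game at once: subtract each action, floor at zero.

    One vectorized call advances the whole batch, which is the pattern
    that accelerators exploit for large simulation speedups.
    """
    return np.maximum(states - actions, 0)

def terminated(states):
    return states == 0

rng = np.random.default_rng(0)
states = init(1024, rng)
while not terminated(states).all():
    actions = rng.integers(1, 3, size=states.shape)  # each agent plays 1 or 2
    states = step(states, actions)
```

In Pgx the same shape of loop is compiled and vectorized on the accelerator, so thousands of board games advance per device step.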


Tokyo hosts Artificial Intelligence exhibition to grab business opportunities

#artificialintelligence

Tokyo [Japan], May 27 (ANI): An Artificial Intelligence exhibition was recently organized in Tokyo showcasing the latest advancements in technology. Multiple software companies developing Artificial Intelligence exhibited their technology and inventions to guests and visitors, hoping to capture bigger business opportunities. Shuta Karakawa, an official of NTQ Japan, a Vietnamese company that mainly develops AI for cinema and image processing, said, "To be honest, AI technology is the same quality anywhere in the world. Vietnam and other countries are proud of that. As the quality of the technology is the same, I think it would be a great advantage to use lower labour costs than Japanese AI companies."


DeepMind Trains AI Agents To Play Games Without Human Interaction Data

#artificialintelligence

In its latest step towards general-purpose AI systems, DeepMind has proposed XLand, a virtual environment for formulating new learning algorithms that control how agents train and the games on which they train. XLand was introduced in a paper titled "Open-Ended Learning Leads to Generally Capable Agents", in which DeepMind researchers demonstrated a technique to train an agent capable of playing many different games without requiring human interaction data. The repetitive process of trial and error has proven effective in teaching computer systems to play many games, including chess, shogi, Go, and StarCraft II. However, one of the main challenges with reinforcement-learning-trained systems is a lack of training data: such systems are unable to adapt their learned behaviours to new tasks because they are not trained on a broad enough set of tasks. For instance, AlphaZero performed well against some of the world's best chess, shogi, and Go programmes even though it knew only each game's basic rules.


Deep Learning for General Game Playing with Ludii and Polygames

Soemers, Dennis J. N. J., Mella, Vegard, Browne, Cameron, Teytaud, Olivier

arXiv.org Artificial Intelligence

Combinations of Monte-Carlo tree search and Deep Neural Networks, trained through self-play, have produced state-of-the-art results for automated game-playing in many board games. The training and search algorithms are not game-specific, but every individual game that these approaches are applied to still requires domain knowledge for implementing the game's rules and for constructing the neural network's architecture -- in particular the shapes of its input and output tensors. Ludii is a general game system that already contains over 500 different games, a number that can rapidly grow thanks to its powerful and user-friendly game description language. Polygames is a framework with training and search algorithms that has already produced superhuman players for several board games. This paper describes the implementation of a bridge between Ludii and Polygames, which enables Polygames to train and evaluate models for games that are implemented and run through Ludii. This removes the need for any game-specific domain knowledge; instead, we leverage our domain knowledge of the Ludii system and its abstract state and move representations to write functions that can automatically determine the appropriate shapes of input and output tensors for any game implemented in Ludii. We describe experimental results for short training runs in a wide variety of different board games, and discuss several open problems and avenues for future research.
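The shape-inference step described above can be sketched in a few lines. The `GameDescription` fields and the AlphaZero-style plane encoding below are assumptions for illustration, not the actual Ludii-Polygames bridge code:

```python
# Hypothetical sketch: derive network tensor shapes from a game's abstract
# description alone. Field names and encoding choices are assumptions.
from dataclasses import dataclass

@dataclass
class GameDescription:
    rows: int
    cols: int
    piece_types: int   # distinct piece types per player
    players: int

def input_shape(g: GameDescription):
    # One binary plane per (player, piece type), plus one plane marking
    # the player to move -- a common AlphaZero-style board encoding.
    channels = g.players * g.piece_types + 1
    return (channels, g.rows, g.cols)

def output_shape(g: GameDescription):
    # Movement games need a source and a destination per move, so the
    # policy head spans (from-cell, to-cell) pairs, one plane per from-cell.
    cells = g.rows * g.cols
    return (cells, g.rows, g.cols)

# A chess-like game: 8x8 board, 6 piece types, 2 players.
chess_like = GameDescription(rows=8, cols=8, piece_types=6, players=2)
```

Because the shapes fall out of the description, no per-game tensor engineering is needed: any new game expressed in the same abstract terms gets its network shapes for free.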


DeepMind's latest AI can master games without being told their rules

Engadget

In 2016, Alphabet's DeepMind came out with AlphaGo, an AI which consistently beat the best human Go players. One year later, the subsidiary went on to refine its work, creating AlphaGo Zero. Where its predecessor learned to play Go by observing amateur and professional matches, AlphaGo Zero mastered the ancient game by simply playing against itself. DeepMind then created AlphaZero, which could play Go, chess and shogi with a single algorithm. What tied all those AIs together is that they knew the rules of the games they had to master going into their training.


Shogi and Artificial Intelligence (Discuss Japan: Japan Foreign Policy Forum)

#artificialintelligence

The waves of the third artificial intelligence (AI) boom are now sweeping across Japan in the same way as earlier fads did in the 1950s and the 1980s. Referring to the ongoing craze in the country, the leading Japanese economic magazine Shukan toyo keizai wrote in its 5 December 2015 issue that "not a single day passes by without hearing about AI." Many companies in Japan are making AI-related announcements one after another, and seminars on AI are held in Tokyo almost every day.

But the question we must ask is this: Is the development of AI good news for mankind? From early on, many people outside Japan forecast a dystopian future if AI were to surpass human intelligence. To cite an early example, Bill Joy, a U.S. computer scientist dubbed the Thomas Edison of the Internet, cautioned in "Why the Future Doesn't Need Us," an article he published in 2000, that robots with higher intelligence may compete with humans and threaten their survival once they become able to self-replicate. More recently, British theoretical physicist and cosmologist Stephen Hawking expressed the fear that "the development of full artificial intelligence could spell the end of the human race." In the same vein, Microsoft founder Bill Gates said, "I am in the camp that is concerned about the threat of super intelligence [to human beings]."

Behind their concern lies the unease that humans will stop being the owners of the highest intelligence on earth. High intelligence is the very thing that has allowed humans to consider themselves special beings, distinguished from other animals. What will happen if and when AI surpasses human intelligence? Will humans really be able to continue their dominance as rulers of the earth? Won't machines deprive humans of many intellectual jobs and, in effect, dominate them? Until recently, however, such arguments about the possible threats posed by AI have been few in Japan.


DeepMind Unveils MuZero, a New Agent that Mastered Chess, Shogi, Atari and Go Without Knowing the Rules

#artificialintelligence

Games have become one of the most efficient vehicles for evaluating artificial intelligence (AI) algorithms. For decades, games have presented the complex competitive, collaborative, planning, and strategic dynamics that mirror the most sophisticated tasks AI agents face in the real world. From chess to Go to StarCraft, games have become a great laboratory for evaluating the capabilities of AI agents in a safe and responsible manner. However, most of those milestones started with agents that were trained on the rules of the game. There is a complementary subset of scenarios in which agents are presented with a new environment without prior knowledge of its dynamics.