Go


Machine Learning And The Changing Face Of Today's Data Centers

#artificialintelligence

Machine learning and artificial intelligence have taken data centers by storm. As racks fill with ASICs, FPGAs, GPUs, and supercomputers, the face of the hyper-scale server farm is changing. These technologies provide the exceptional computing power needed to train machine learning systems. Machine learning involves tremendous amounts of data crunching, a herculean task in itself. The ultimate goal of this demanding process is to create smarter applications and to improve services already in everyday use.


AI Can Thrive in Open Societies

#artificialintelligence

According to foreign-policy experts and the defense establishment, the United States is caught in an artificial intelligence arms race with China--one with serious implications for national security. The conventional version of this story suggests that the United States is at a disadvantage because of self-imposed restraints on the collection of data and the privacy of its citizens, while China, an unrestrained surveillance state, is at an advantage. In this vision, the data that China collects will be fed into its systems, leading to more powerful AI with capabilities we can only imagine today. Since Western countries can't or won't reap such a comprehensive harvest of data from their citizens, China will win the AI arms race and dominate the next century. This idea makes for a compelling narrative, especially for those trying to justify surveillance--whether government- or corporate-run.


What is the difference between Artificial Intelligence, ML and DL?

#artificialintelligence

In today's competitive world, computer science engineering students at B Tech College in Jaipur should know the difference between AI, machine learning, and deep learning. To do so, they must first understand these complex systems and learn where each field is headed before taking them on. Artificial intelligence is characterized by a variety of terms that appear almost everywhere, and students of Top Engineering Colleges in Jaipur often use them interchangeably. Broadly, AI refers to algorithms designed to solve input problems.


Building a Computer Mahjong Player via Deep Convolutional Neural Networks

arXiv.org Artificial Intelligence

The evaluation function for imperfect-information games is always hard to define but has a significant impact on the playing strength of a program. Deep learning has made great achievements in recent years, and has already exceeded the level of top human players even in the game of Go. In this paper, we introduce a new data model to represent the available imperfect information on the game table, and construct a well-designed convolutional neural network trained on game records. We choose the accuracy of tile discarding, also called the agreement rate, as the benchmark for this study. Our accuracy on test data reaches 70.44%, while the state-of-the-art baseline is 62.1%, reported by Mizukami and Tsuruoka (2015); it is also significantly higher than previous trials using deep learning, which shows the promising potential of our new model. To build the AI program, besides the tile-discarding strategy, we adopt similar prediction strategies for other actions such as stealing (pon, chi, and kan) and riichi. With a simple combination of these prediction networks and without any knowledge of the concrete rules of the game, we evaluate the strength of the resulting program on the largest Japanese Mahjong site, Tenhou. The program has achieved a rating of around 1850, significantly higher than that of an average human player and of programs from past studies.
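As a rough illustration of the kind of model described here, the sketch below builds a small convolutional network that maps an encoded table state to a discard distribution over the 34 Mahjong tile types. The plane count, layer sizes, and tf.keras usage are assumptions for the sketch, not the paper's actual architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical encoding: each of the 34 tile types is a column, and rows
# are binary feature planes (own hand, discards, melds, ...). The plane
# count and layer widths are illustrative, not the paper's design.
NUM_TILE_TYPES = 34
NUM_FEATURE_PLANES = 16  # assumed

def build_discard_model():
    inputs = layers.Input(shape=(NUM_FEATURE_PLANES, NUM_TILE_TYPES, 1))
    x = inputs
    for _ in range(3):  # small stack of same-padded convolutions
        x = layers.Conv2D(64, (3, 3), padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(256, activation="relu")(x)
    # One probability per tile type: which tile to discard.
    outputs = layers.Dense(NUM_TILE_TYPES, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])  # "agreement rate" vs. human records
    return model
```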


Temporal-difference learning for nonlinear value function approximation in the lazy training regime

arXiv.org Machine Learning

In recent years, deep reinforcement learning has pushed the boundaries of artificial intelligence to an unprecedented level, achieving what was expected to become possible only in a decade and outperforming human intelligence in a number of highly complex tasks. Paramount examples of this potential have appeared over the past few years, with such algorithms mastering games and tasks of increasing complexity, from playing Atari to learning to walk and beating world grandmasters at the game of Go [16, 23, 24, 31-33]. Such impressive success would be impossible without using neural networks to approximate value functions and/or policy functions in reinforcement learning algorithms. While neural networks, in particular deep neural networks, provide a powerful and versatile tool for approximating high-dimensional functions [4, 12, 17], their intrinsic nonlinearity can also cause trouble in training, particularly in the context of reinforcement learning. For example, it is well known that nonlinear approximation of the value function can cause the classical temporal-difference learning algorithm to diverge due to instability [40].
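To make the setting concrete, here is a minimal semi-gradient TD(0) sketch with a small nonlinear (one tanh hidden layer) value approximator; the toy sizes and one-hot states are illustrative assumptions, not the paper's lazy-training regime.

```python
import numpy as np

# Minimal sketch of semi-gradient TD(0) with a nonlinear value function
# V(s) = w2 . tanh(W1 s). Sizes and states are toy assumptions.
rng = np.random.default_rng(0)
N_STATES, HIDDEN, GAMMA, ALPHA = 5, 16, 0.9, 0.01
W1 = rng.normal(scale=0.1, size=(HIDDEN, N_STATES))
w2 = rng.normal(scale=0.1, size=HIDDEN)

def value(s):
    """Return V(s) and the hidden activations for a one-hot state s."""
    h = np.tanh(W1 @ s)
    return w2 @ h, h

def td0_update(s, r, s_next):
    """One semi-gradient TD(0) step; V(s_next) is treated as a constant."""
    global W1, w2
    v, h = value(s)
    v_next, _ = value(s_next)
    delta = r + GAMMA * v_next - v            # TD error
    grad_w2 = h                               # dV/dw2
    grad_W1 = np.outer(w2 * (1 - h**2), s)    # dV/dW1 via tanh'
    w2 = w2 + ALPHA * delta * grad_w2
    W1 = W1 + ALPHA * delta * grad_W1
```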


How to build your own AlphaZero AI using Python and Keras

#artificialintelligence

Teach a machine to learn Connect4 strategy through self-play and deep learning. In this article I'll attempt to cover three things: two reasons why AlphaZero is a massive step forward for Artificial Intelligence, how you can build a replica of the AlphaZero methodology to play the game Connect4, and how you can adapt the code to plug in other games. First, a quick note about a new platform, The Network -- a place where data scientists can find paid contract projects with businesses! In March 2016, DeepMind's AlphaGo beat 18-time world champion Go player Lee Sedol 4–1 in a series watched by over 200 million people. A machine had learnt a superhuman strategy for playing Go, a feat previously thought impossible, or at the very least a decade away from being accomplished. This in itself was a remarkable achievement. However, on 18 October 2017, DeepMind took a giant leap further.
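As a taste of what such a replica involves, below is a hedged Keras sketch of an AlphaZero-style policy/value network for Connect4: one model with a policy head over the 7 columns and a tanh value head. The layer sizes are illustrative, not the exact architecture from the article's code.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Sketch of a policy/value network for Connect4 (6x7 board, 2 input
# planes: current player's pieces and opponent's). Sizes are assumptions.
def build_policy_value_net():
    board = layers.Input(shape=(6, 7, 2))
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(board)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    # Policy head: a probability for each of the 7 columns.
    policy = layers.Dense(7, activation="softmax", name="policy")(x)
    # Value head: expected outcome in [-1, 1] for the player to move.
    value = layers.Dense(1, activation="tanh", name="value")(x)
    model = models.Model(board, [policy, value])
    model.compile(optimizer="adam",
                  loss={"policy": "categorical_crossentropy",
                        "value": "mse"})
    return model
```

In training, the policy targets come from MCTS visit counts during self-play and the value targets from game outcomes, which is the core of the AlphaZero loop the article walks through.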


Sequence Modeling of Temporal Credit Assignment for Episodic Reinforcement Learning

arXiv.org Machine Learning

Recent advances in deep reinforcement learning algorithms have shown great potential and success for solving many challenging real-world problems, including the game of Go and robotic applications. Usually, these algorithms need a carefully designed reward function to guide training at each time step. However, in the real world, it is non-trivial to design such a reward function, and the only signal available is often obtained at the end of a trajectory, also known as the episodic reward or return. In this work, we introduce a new algorithm for temporal credit assignment, which learns to decompose the episodic return back to each time step in the trajectory using deep neural networks. With this learned reward signal, learning efficiency can be substantially improved for episodic reinforcement learning. In particular, we find that expressive sequence models such as the Transformer can be adopted to learn the importance and the dependency of states in the trajectory, thereby providing high-quality and interpretable learned reward signals. We have performed extensive experiments on a set of MuJoCo continuous locomotion control tasks with only episodic returns and demonstrated the effectiveness of our algorithm.
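A minimal sketch of the return-decomposition idea, assuming a single self-attention block in place of the paper's full Transformer: the model emits a reward for every step and is trained so that the per-step rewards sum to the observed episodic return. All sizes and names are illustrative.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative return decomposition: a self-attention block scores every
# state, and per-step rewards are fit so they sum to the episodic return.
STATE_DIM, MAX_T = 17, 200  # assumed, not the paper's settings

def build_return_decomposer():
    states = layers.Input(shape=(MAX_T, STATE_DIM))
    attn = layers.MultiHeadAttention(num_heads=4, key_dim=32)(states, states)
    x = layers.LayerNormalization()(states + attn)    # residual + norm
    step_rewards = layers.Dense(1, name="step_rewards")(x)  # r_t per step
    episodic = layers.Lambda(lambda r: tf.reduce_sum(r, axis=1))(step_rewards)
    model = models.Model(states, episodic)
    model.compile(optimizer="adam", loss="mse")  # match observed returns
    return model
```

After training, the intermediate `step_rewards` output would serve as the dense learned reward signal for the RL agent.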


Multiple Policy Value Monte Carlo Tree Search

arXiv.org Artificial Intelligence

Many of the strongest game-playing programs use a combination of Monte Carlo tree search (MCTS) and deep neural networks (DNNs), where the DNNs serve as policy or value evaluators. Given a limited budget, such as in online play or during the self-play phase of AlphaZero (AZ) training, a balance needs to be struck between accurate state estimation and more MCTS simulations, both of which are critical for a strong game-playing agent. Typically, larger DNNs are better at generalization and accurate evaluation, while smaller DNNs are less costly and can therefore run more MCTS simulations and build bigger search trees within the same budget. This paper introduces a new method called multiple policy value MCTS (MPV-MCTS), which combines multiple policy-value neural networks (PV-NNs) of various sizes to retain the advantages of each; two PV-NNs, f_S and f_L, are used in this paper. We show through experiments on the game NoGo that MPV-MCTS combining f_S and f_L outperforms a single PV-NN with policy-value MCTS (PV-MCTS). Additionally, MPV-MCTS also outperforms PV-MCTS for AZ training.
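For context, both PV-MCTS and MPV-MCTS build on the standard PUCT selection rule sketched below; the constant and node fields are generic assumptions rather than the paper's implementation, and in MPV-MCTS the priors and leaf values would come from f_S or f_L depending on the tree.

```python
import math
from dataclasses import dataclass, field

# Generic sketch of PUCT selection in policy-value MCTS; C_PUCT and the
# Node layout are illustrative assumptions.
C_PUCT = 1.5

@dataclass
class Node:
    prior: float                 # policy probability from a PV-NN
    visits: int = 0
    value_sum: float = 0.0
    children: list = field(default_factory=list)

def puct_select(node: Node) -> Node:
    """Pick the child maximizing Q + U under the PUCT rule."""
    total = sum(c.visits for c in node.children)
    def score(c: Node) -> float:
        q = c.value_sum / c.visits if c.visits else 0.0
        u = C_PUCT * c.prior * math.sqrt(total) / (1 + c.visits)
        return q + u
    return max(node.children, key=score)
```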


Solving NP-Hard Problems on Graphs by Reinforcement Learning without Domain Knowledge

arXiv.org Machine Learning

We propose an algorithm based on reinforcement learning for solving NP-hard problems on graphs. We combine Graph Isomorphism Networks and Monte Carlo tree search, which was originally used for game search, to solve combinatorial optimization on graphs. Similarly to AlphaGo Zero, our method does not require any problem-specific knowledge or labeled datasets (exact solutions), which are difficult to obtain in principle. We show that our method, trained on randomly generated graphs, successfully finds near-optimal solutions to the Maximum Independent Set problem on citation networks. Experiments illustrate that the performance of our method is comparable to state-of-the-art solvers, yet we do not require any problem-specific reduction rules, which is highly desirable in practice since collecting hand-crafted reduction rules is costly and does not adapt across a wide range of problems.
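Since the method rests on Graph Isomorphism Networks, a single GIN update is sketched below; the feature width, random weight, and one-layer ReLU MLP are stand-ins for the paper's trained network.

```python
import numpy as np

# One GIN layer: h_v' = MLP((1 + eps) * h_v + sum of neighbor features).
# D, W, and the single-layer MLP are illustrative assumptions.
rng = np.random.default_rng(0)
D = 8
W = rng.normal(scale=0.1, size=(D, D))

def gin_layer(H, A, eps=0.0):
    """H: (n, D) node features; A: (n, n) 0/1 adjacency matrix."""
    agg = (1.0 + eps) * H + A @ H    # self term plus neighbor sum
    return np.maximum(agg @ W, 0.0)  # ReLU "MLP" with a single layer
```

Stacking a few such layers gives per-node embeddings that the MCTS-guided policy can use to decide which vertex to add to the independent set.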


SAI: a Sensible Artificial Intelligence that plays with handicap and targets high scores in 9x9 Go (extended version)

arXiv.org Artificial Intelligence

We develop a new model that can be applied to any perfect-information two-player zero-sum game to target a high score, and thus perfect play. We integrate this model into the Monte Carlo tree search / policy iteration learning pipeline introduced by Google DeepMind with AlphaGo. Training this model on 9x9 Go produces a superhuman Go player, demonstrating that it is stable and robust. We show that this model can be used to play effectively with both positional and score handicaps. We develop a family of agents that can target high scores against any opponent and recover from a very severe disadvantage against weak opponents. To the best of our knowledge, these are the first effective achievements in this direction.