Go


Understanding AlphaGo: how AI thinks and learns (Advanced)

#artificialintelligence

"It was the worst possible time, Everyone else was doing something different." In 1943, neurophysiologist Warren McCulloch and mathematician Walter Pitts created computational models based on math algorithms called Threshold Logic Unit (TLU) to describe how neurons might work. Simulations of neural networks were possible until computers became more advanced in the 1950s. Before the 2000s it was considered one of the worst areas of research. LeCun and Hinton variously mentioned how in this period their papers were routinely rejected from being published due to their subject being neural networks.


The differences between Artificial and Biological Neural Networks

#artificialintelligence

Although artificial neurons and perceptrons were inspired by the biological processes scientists were able to observe in the brain back in the 1950s, they do differ from their biological counterparts in several ways. Birds inspired flight and horses inspired locomotives and cars, yet none of today's transportation vehicles resemble metal skeletons of living, breathing, self-replicating animals. Still, our limited machines are far more powerful in their own domains (and thus more useful to us humans) than their animal "ancestors" could ever be. It is easy to draw the wrong conclusions about the possibilities in AI research by anthropomorphizing deep neural networks, but artificial and biological neurons differ in more ways than just the materials of their containers. The idea behind perceptrons (the predecessors of artificial neurons) is that it is possible to mimic certain parts of neurons, such as dendrites, cell bodies, and axons, using simplified mathematical models of what limited knowledge we have of their inner workings: signals are received through the dendrites and sent down the axon once enough of them have arrived.
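
As a concrete illustration of that "fire once enough signals arrive" idea, here is a minimal perceptron sketch in Python. The logical-OR training data, learning rate, and epoch count are illustrative assumptions rather than details from the article.

```python
# Minimal perceptron sketch mirroring the dendrite/axon description above.
# Training data, learning rate, and epoch count are illustrative assumptions.

def predict(weights, bias, inputs):
    # "Dendrites": weighted incoming signals; "axon": fire once the sum clears zero.
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if activation >= 0 else 0

def train(samples, labels, epochs=10, lr=0.1):
    weights, bias = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            error = target - predict(weights, bias, x)
            # Classic perceptron update: nudge weights to reduce the error.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Example: learn logical OR from four labelled points.
samples, labels = [[0, 0], [0, 1], [1, 0], [1, 1]], [0, 1, 1, 1]
w, b = train(samples, labels)
print([predict(w, b, x) for x in samples])  # [0, 1, 1, 1]
```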


Tech's Biggest Leaps From the Last 10 Years, and Why They Matter

#artificialintelligence

As we enter the third decade of the 21st century, it seems appropriate to reflect on how technology developed and note the breakthroughs achieved in the last 10 years. The 2010s saw IBM's Watson win a game of Jeopardy!, ushering in mainstream awareness of machine learning, along with DeepMind's AlphaGo beating the world's best Go players. It was the decade in which industrial tools like drones, 3D printers, genetic sequencing, and virtual reality (VR) all became consumer products. And it was a decade in which some alarming trends related to surveillance, targeted misinformation, and deepfakes came online. For better or worse, the past decade was a breathtaking era in human history in which the idea of exponential growth in information technologies powered by computation became a mainstream concept.


Does AlphaGo actually play Go? Concerning the State Space of Artificial Intelligence

arXiv.org Artificial Intelligence

The overarching goal of this paper is to develop a general model of the state space of AI. Given the breathtaking progress in AI research and technologies in recent years, such conceptual work is of substantial theoretical interest. The present AI hype is mainly driven by the triumph of deep learning neural networks. As the distinguishing feature of such networks is the ability to self-learn, self-learning is identified as one important dimension of the AI state space. Another main dimension lies in the possibility of moving from specific to more general types of problems. The third main dimension is provided by semantic grounding. Since this is a philosophically complex and controversial dimension, a larger part of the paper is devoted to it. We take a fresh look at known foundational arguments in the philosophy of mind and cognition that are gaining new relevance in view of recent AI developments, including the blockhead objection, the Turing test, the symbol grounding problem, the Chinese room argument, and general use-theoretic considerations of meaning. Finally, the AI state space, spanned by the main dimensions of generalization, grounding, and "self-x-ness" (possessing self-x properties such as self-learning), is outlined.


A researcher in Japan designed an AI program for Othello that always loses to human players

Daily Mail - Science & tech

A new online version of the game Othello has become a hit in Japan because its AI has been designed to always lose, and players love it. The game, called 'The Weakest AI Othello,' was released in August and has since attracted over 400,000 players across more than 1.29 million games. It was developed by Takuma Yoshida, who works at Avilen, a Tokyo firm that designs AI and machine learning tools for businesses. 'The Weakest AI Othello' is an online version of the popular board game in which the computer AI has been designed to always lose to the human player. One day at work, Yoshida began to question why he was spending so much time trying to engineer software to outperform humans. He wondered whether human attitudes toward AI and robotics might be different if humans didn't always expect to be beaten by them, according to a report in the Asahi Shimbun.
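
The article does not describe how Yoshida's engine is implemented. As a purely hypothetical sketch of one naive way to build an Othello opponent that plays to lose, the Python below generates legal moves and greedily picks whichever flips the fewest opposing discs; a genuinely "always losing" engine would need deeper search, but the inverted objective is the core idea.

```python
# Hypothetical sketch (not Yoshida's code): an Othello move chooser that plays
# to lose by greedily picking the legal move that flips the FEWEST discs.
SIZE = 8
DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def new_board():
    board = [["." for _ in range(SIZE)] for _ in range(SIZE)]
    board[3][3], board[4][4] = "W", "W"
    board[3][4], board[4][3] = "B", "B"
    return board

def flips(board, row, col, player):
    """Discs that would be flipped by `player` playing at (row, col)."""
    if board[row][col] != ".":
        return []
    opponent = "W" if player == "B" else "B"
    flipped = []
    for dr, dc in DIRS:
        line, r, c = [], row + dr, col + dc
        while 0 <= r < SIZE and 0 <= c < SIZE and board[r][c] == opponent:
            line.append((r, c))
            r, c = r + dr, c + dc
        if line and 0 <= r < SIZE and 0 <= c < SIZE and board[r][c] == player:
            flipped.extend(line)
    return flipped

def weakest_move(board, player):
    """Return the legal move that flips the fewest discs, i.e. play to lose."""
    moves = []
    for r in range(SIZE):
        for c in range(SIZE):
            f = flips(board, r, c, player)
            if f:
                moves.append(((r, c), len(f)))
    return min(moves, key=lambda m: m[1])[0] if moves else None

print(weakest_move(new_board(), "B"))  # e.g. (2, 3) on the opening position
```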


Artificial intelligence, what to expect in its field's future

#artificialintelligence

The world of Go, the traditional Chinese board game, has recently been shaken by the introduction of an elite player: an AI developed by Google. AlphaGo, as the program is called, has been defeated only once since its introduction in 2016, by Go master Lee Se-dol, according to an article by Vice News. After that defeat, Google trained a new version of the program by having it play against the original AI. When AlphaGo 2.0 beat its predecessor 100 times in a row, it was deemed sufficiently improved. Faced with a brand-new, insurmountable foe, Lee, considered one of the best human Go players in the world, retired.



Go World Champion Retires After Realizing He Can't Beat AI - Hacker News

#artificialintelligence

Go is one of the most complex abstract strategy games; the goal is to surround more territory than your opponent. Lee Sedol is a world Go champion and holds the second-highest number of international titles. In March 2016, Lee Sedol competed against Google's AI-based AlphaGo program and lost four of the five games to the AI. Sedol was shocked after the defeat and said, "I don't know how to start or what to say today, but I think I would have to express my apologies first. I do apologize for not being able to satisfy a lot of people's expectations. I kind of felt powerless."


Why The Retirement Of Lee Se-Dol, Former 'Go' Champion, Is A Sign Of Things To Come

#artificialintelligence

[Photo caption: South Korean professional Go player Lee Se-Dol after the match against Google's artificial intelligence program AlphaGo on March 10, 2016 in Seoul, South Korea.]

In May 1997, IBM's Deep Blue supercomputer defeated the reigning world chess champion, Garry Kasparov, in an official match under tournament conditions. Fast forward to 2011: IBM extended its work in machine learning, natural language processing, and information retrieval to build Watson, a system capable of defeating two highly decorated Jeopardy! champions, Brad Rutter and Ken Jennings. Progress in game-playing artificial intelligence was swift, but it wasn't until the introduction of Google DeepMind's AlphaGo in 2016 that things started to change dramatically. AlphaGo tackled the notion that Go, an ancient Chinese board game invented thousands of years ago, could not be mastered by a computer because of the near-limitless number of moves a player can execute.
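
To put that "near-limitless" claim in rough numbers, a naive game-tree estimate for Go dwarfs the one for chess. The branching factors and game lengths below are commonly cited approximations, not figures from the article.

```python
# Back-of-the-envelope game-tree size comparison. Branching factors and
# typical game lengths are commonly cited approximations, not article figures.
import math

def log10_game_tree(branching_factor, game_length):
    """log10 of branching_factor ** game_length, the naive game-tree size."""
    return game_length * math.log10(branching_factor)

print("chess: ~10^%d" % log10_game_tree(35, 80))    # roughly 10^123
print("go:    ~10^%d" % log10_game_tree(250, 150))  # roughly 10^359
```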