Go



Google's Artificial Intelligence Destroyed the World's Best Go Player. Then He Gave This Extraordinary Response

#artificialintelligence

It was billed as a battle of human intelligence versus artificial intelligence, man versus machine. Just over a month ago, a Google computer program named AlphaGo competed against 19-year-old Chinese prodigy Ke Jie, the top-ranked player of Go, widely regarded as the world's most sophisticated board game. I see Ke Jie's response to defeat as a remarkable example of emotional intelligence (EI), the ability to make emotions work for you instead of against you. It's about cultivating a mindset of continuous growth and ongoing self-improvement.


In Edmonton, companies find a humble hub for artificial intelligence

#artificialintelligence

It's there you'll find the professors who solved the game of checkers, beat a top human player in the game of Go and used cutting-edge artificial intelligence to outsmart a handful of professional poker players for the very first time. Richard Sutton is a pioneer in a branch of artificial intelligence research known as reinforcement learning -- the computer science equivalent of treat-training a dog, except in this case the dog is an algorithm that's been incentivized to behave in a certain way. U of A computing science professors and artificial intelligence researchers Richard Sutton, Michael Bowling and Patrick Pilarski are working with Google's DeepMind to open the AI company's first research lab outside the U.K., in Edmonton. Last week, Google's AI subsidiary DeepMind announced it was opening its first international office in Edmonton, where Sutton -- alongside professors Michael Bowling and Patrick Pilarski -- will work part-time.
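
The "treat-training" analogy maps onto a simple loop: an agent tries actions, receives a reward when it does the right thing, and updates its estimate of how good each action is. Below is a minimal, illustrative sketch of that idea using tabular Q-learning on an invented five-cell corridor; the environment, hyperparameters and code are assumptions made for this example, not anything from DeepMind or the U of A group.

```python
# Minimal illustration of the "treat-training" idea: tabular Q-learning on a
# toy 5-cell corridor. The environment and hyperparameters are invented for
# this sketch; nothing here is code from DeepMind or the U of A researchers.
import random

N_STATES = 5          # cells 0..4; reaching cell 4 earns the "treat"
ACTIONS = (-1, +1)    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move within the corridor; reward 1 only on reaching the goal cell."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reached_goal = nxt == N_STATES - 1
    return nxt, (1.0 if reached_goal else 0.0), reached_goal

def greedy(state):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for _ in range(500):                      # 500 training episodes
    state, done = 0, False
    while not done:
        # explore occasionally, otherwise exploit what has worked so far
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        nxt, reward, done = step(state, action)
        # nudge the action's value toward reward plus discounted future value
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

print({s: greedy(s) for s in range(N_STATES)})   # learned policy: +1 (right) for non-goal cells
```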


Thinking of doing a machine learning PhD? Read this first.

#artificialintelligence

It was a machine learning technique called deep learning, which is loosely inspired by the network structure of our brains. Machine learning's successes are not limited to game playing – it has made strides in many tasks, including driving, language translation, and speech recognition. Now everyone wants to get in on the machine learning action; the field has become wildly popular in recent years. So, if you have a quantitative background (not necessarily in computer science) and want to have a positive impact on the world, we think machine learning is one of the best fields in which to pursue a PhD.
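
For readers new to the term, a deep-learning model is essentially a stack of layers of simple units whose weights are adjusted by gradient descent. The toy network below is written for illustration only; the XOR task, layer sizes and learning rate are all invented for this sketch, and real systems are vastly larger but follow the same forward/backward pattern.

```python
# Toy example of the layered "network of simple units" behind deep learning:
# a tiny two-layer network learns XOR by gradient descent. All sizes and
# settings are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)     # XOR is not linearly separable

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)      # hidden layer of 8 units
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)      # single output unit
lr = 0.1

for _ in range(5000):
    # forward pass through the layers
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))        # sigmoid output
    # backward pass: cross-entropy gradient, propagated layer by layer
    dlogit = p - y
    dh = (dlogit @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ dlogit
    b2 -= lr * dlogit.sum(0)
    W1 -= lr * X.T @ dh
    b1 -= lr * dh.sum(0)

print(np.round(p.ravel(), 2))                        # should end up close to [0, 1, 1, 0]
```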


First DeepMind AI conquered Go. Now it's time to stop playing games

#artificialintelligence

DeepMind's AlphaGo artificial intelligence shut out the world's best Go player, 19-year-old Ke Jie, ending their series at 3-0 in late May. For the same reason, DeepMind probably won't teach a machine to play Arimaa, a board game developed with the specific purpose of being difficult for machines to play. From Deep Blue facing Kasparov, to AlphaGo squaring up to Ke Jie, there have always been detractors who have claimed that the computer players have been programmed with a specific opponent in mind. In DeepMind's blog post officially announcing AlphaGo's retirement from competitive play, Hassabis and Silver noted that the team behind the technology is moving on to algorithms that could help with tasks like "finding new cures for diseases, dramatically reducing energy consumption, or inventing revolutionary new materials."


Seeing AI as a tool that intrinsically benefits humanity - Nikkei Asian Review

#artificialintelligence

Demis Hassabis, CEO of DeepMind Technologies and developer of AlphaGo, speaks during the Future of Go summit in Wuzhen, China on May 25. (Photo by Joshua Ogawa) Naysayers would argue that developing AI this way will be difficult, due to the black-box nature of AI's deep learning process. The Japanese Society for Artificial Intelligence expressed similar views in the September 2016 edition of its journal, describing what it calls humane artificial intelligence, and in February published ethical guidelines for member AI researchers.


Interview: Human brain is entirely 'computable' -- AlphaGo developer Hassabis - Nikkei Asian Review

#artificialintelligence

To help in very important areas of the world: climate, disease and other areas of science -- chemistry, biology, materials science -- to advance the world for the benefit of everyone. So, we need to analyze how to build those systems in the right way -- to make them like tools -- and then build other tools, like visualization tools or interpretability tools, to understand how the system is working and making its decisions. Demis Hassabis, left, stands with the world's top Go player Ke Jie, center, and Eric Schmidt, executive chairman of Alphabet, which owns Google and DeepMind, on May 23. (Courtesy of Google)


Google's DeepMind Is Teaching AI How to Think Like a Human

#artificialintelligence

Last year, for the first time, an artificial intelligence called AlphaGo beat a reigning human champion in a game of Go. While AlphaGo's victory was certainly impressive, this artificial intelligence, which has since beaten a number of other Go champions, is still considered "narrow" AI--that is, a type of artificial intelligence that can only outperform a human in a very limited domain of tasks. In contrast, the AI often depicted in science fiction is called "general" artificial intelligence, which means that it has the same level and diversity of intelligence as a human. In the second test, DeepMind researchers created a neural net called the Visual Interaction Network (VIN), which they trained to predict the future states of an object in a video based on its past motion.
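
The VIN's training objective can be pictured with a much simpler stand-in: given an object's recent positions, predict where it will be next. The sketch below uses an invented bouncing-ball simulation and a plain least-squares model in place of DeepMind's neural network, purely to illustrate the predict-the-future-state setup, not the VIN architecture itself.

```python
# Illustrative sketch of the *task* VIN is trained on -- predict an object's
# next state from its recent motion -- using a toy linear model fit by least
# squares. The bouncing-ball data and model are invented for this example.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ball(steps=200):
    """2D ball bouncing inside a unit box with constant speed (toy physics)."""
    pos, vel = rng.random(2), rng.uniform(-0.05, 0.05, 2)
    traj = []
    for _ in range(steps):
        pos = pos + vel
        vel = np.where((pos < 0) | (pos > 1), -vel, vel)   # bounce off the walls
        pos = np.clip(pos, 0, 1)
        traj.append(pos.copy())
    return np.array(traj)

traj = simulate_ball()
# inputs: the last two positions; target: the next position
X = np.hstack([traj[:-2], traj[1:-1]])   # shape (steps-2, 4)
y = traj[2:]                             # shape (steps-2, 2)

# closed-form least squares stands in for the learned neural predictor
W, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ W
print("mean prediction error:", np.abs(pred - y).mean())
```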


Forget AlphaGo, DeepMind has a more interesting step toward general AI

#artificialintelligence

In two papers published this week and reported by New Scientist, researchers at the Alphabet subsidiary describe efforts to teach computers about relational reasoning, a cognitive capability that is foundational to human intelligence. The two systems developed at DeepMind tackle this by modifying existing machine-learning methods so they can learn about physical relationships between static objects, as well as the behavior of moving objects over time. In the second paper, the researchers show how a similarly modified machine-learning system can learn to predict the behavior of simple objects in two dimensions. Without new ideas, AI systems will remain incapable of things like holding a real conversation or solving difficult problems on their own.
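
One common way to summarize the first paper's approach to relational reasoning is: encode each object, run a small shared network over every pair of objects, and sum the results before producing an answer. The forward pass below is a minimal, untrained sketch of that pairwise structure; the object features, layer sizes and weights are invented for illustration and are not DeepMind's actual model.

```python
# Minimal forward-pass sketch of the pairwise "relational" idea: score every
# pair of object representations with one shared small network and sum the
# results. Weights are random and untrained; shapes are invented.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

def mlp(x, weights):
    """Tiny fully connected net with ReLU between layers."""
    for W, b in weights[:-1]:
        x = np.maximum(x @ W + b, 0.0)
    W, b = weights[-1]
    return x @ W + b

def init(sizes):
    """Random weights for a stack of dense layers with the given sizes."""
    return [(rng.normal(0, 0.1, (a, b)), np.zeros(b)) for a, b in zip(sizes, sizes[1:])]

obj_dim, hidden, out_dim = 8, 32, 4
g = init([2 * obj_dim, hidden, hidden])   # processes one pair of objects
f = init([hidden, hidden, out_dim])       # processes the aggregated relations

objects = rng.normal(size=(5, obj_dim))   # e.g. 5 detected objects in a scene

# sum the shared pair network over all object pairs, then decode an answer
pair_sum = sum(mlp(np.concatenate([objects[i], objects[j]]), g)
               for i, j in combinations(range(len(objects)), 2))
answer = mlp(pair_sum, f)
print(answer.shape)   # (4,) -- e.g. logits over possible answers
```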

