Google's AlphaGo Trounces Humans--But It Also Gives Them a Boost


Over the next two years, it evolved into a far more complex AI capable of beating the world's top players--nine-dan professional grandmasters like Ke Jie, who lost two straight games to the machine this week at a match in China. Given that top Go players rely so heavily on intuition when playing this enormously complex game--a very human talent--AlphaGo marks a turning point in the progress of artificial intelligence. The event in China also included a "pair Go" match in which the machine played alongside grandmasters rather than against them. "In some cases," grandmaster Gu Li said after a pair game alongside AlphaGo, "I could not follow in his footsteps."

Curious AI learns by exploring game worlds and making mistakes

New Scientist

So, rather than seeking a reward in the game world, the algorithm was rewarded for exploring and mastering skills that led it to discover more about that world. This type of approach can speed up learning and improve the efficiency of algorithms, says Max Jaderberg at Google's AI company DeepMind, whose team's algorithm learned much more quickly than conventional reinforcement learning approaches. Imbued with a sense of curiosity, Pathak's own AI learned to stomp on enemies and jump over pits in Mario, and to explore faraway rooms and walk down hallways in another game similar to Doom.
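The idea above can be sketched in a few lines. This is a hypothetical toy, not Pathak's or DeepMind's actual code: the agent's only reward is the prediction error of its own forward model, so familiar transitions stop paying and the agent is pushed toward states it has not yet mastered. All names here (`ForwardModel`, `intrinsic_reward`, the corridor world) are invented for illustration.

```python
import random

class ForwardModel:
    """Tabular forward model: remembers the next state seen for (state, action)."""
    def __init__(self):
        self.table = {}  # (state, action) -> predicted next state

    def predict(self, state, action):
        return self.table.get((state, action))

    def update(self, state, action, next_state):
        self.table[(state, action)] = next_state

def intrinsic_reward(model, state, action, next_state):
    # Curiosity reward = prediction error: 1.0 when the model is wrong
    # or has never seen this transition, 0.0 once it is familiar.
    return 0.0 if model.predict(state, action) == next_state else 1.0

def step(state, action):
    # A deterministic 5-cell corridor: actions -1/+1 move the agent.
    return max(0, min(4, state + action))

model = ForwardModel()
state, total_curiosity = 0, 0.0
random.seed(0)
for _ in range(50):
    action = random.choice([-1, 1])
    nxt = step(state, action)
    total_curiosity += intrinsic_reward(model, state, action, nxt)
    model.update(state, action, nxt)  # learn; repeated moves stop paying
    state = nxt

# Only the first visit to each (state, action) pair is "surprising", so
# total curiosity is bounded by the 10 distinct transitions in this world.
print(total_curiosity)
```

Because the reward source is the agent's own model rather than the environment, this kind of signal still works in games whose extrinsic rewards are sparse or absent, which is the point the excerpt makes.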

Google AI defeats human Go champion

India Khabar

Prof Noel Sharkey, a computer scientist at Sheffield University, said the technology is still a long way from general intelligence. Prof Nello Cristianini, from Bristol University, added: "This is machine learning in action and it proves that machines are very capable, but it is not general intelligence." The kinds of intelligence exhibited by machines that are good at playing games are considered very narrow. Prof Cristianini added that while competition at the gaming level is fine, it should not govern how we view our relationship with intelligent machines going forward.

Google's AlphaGo AI wins three-match series against the world's best Go player


Today it beat Go world champion Ke Jie to clinch a second, decisive win in a three-part series taking place in China this week--despite Ke Jie playing "perfectly" at the beginning of the game, according to AlphaGo's own analysis. There's still another game to be played, but, irrespective of that result, AlphaGo has now defeated the man widely acknowledged to be the best player of mankind's most complicated strategy game. Beyond winning showcase matches against the world's top Go players, DeepMind believes its technology has practical, everyday uses that can help "solve intelligence and make the world a better place."

Daily Report: AlphaGo Wins Again


In the second match of a three-game series on Thursday, Google DeepMind's AlphaGo program beat the 19-year-old Chinese prodigy Ke Jie at the strategy board game Go. AlphaGo won the first game earlier in the week; the final game is scheduled for Saturday. But as Paul Mozur, a New York Times technology reporter, notes, AlphaGo has already proved its superiority by taking two out of three games. And last year AlphaGo beat another top Go player, South Korea's Lee Se-dol, four games to one.

Why AI gets the language of games but sucks at translating languages


In a recent translation competition, human beings beat AI, but it is only a matter of time before machines become digital Babel fish. In 1996, IBM's Deep Blue computer first challenged world chess champion Garry Kasparov. So when Google DeepMind's AlphaGo program beat Lee Sedol 4-1 in March 2016, it came as a shock. However, literary and marketing translations--which require the target text to be almost trans-created to suit the target market--will continue to pose a tough challenge for even the most advanced AI machine translation systems.

A more powerful version of AlphaGo defeated the world's number one Go player


Ke and AlphaGo will play three games at the event, which will also feature several games in which one or more human players collaborate with AlphaGo. DeepMind revealed AlphaGo in early 2016, demonstrating that an AI program could teach itself to play a game that demands instinctive play and has proven resistant to conventional programming approaches. The program taught itself using reinforcement learning, a type of machine learning inspired by the way animals learn through positive feedback. In the opening game against AlphaGo, Ke Jie even used one of the tactics the program itself had played in earlier games.
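The learning-by-positive-feedback idea can be illustrated with a minimal tabular Q-learning toy. To be clear, this is a generic textbook sketch, not AlphaGo's actual training setup (AlphaGo combines deep neural networks with tree search); every name and parameter here is invented for the example. Actions that are followed by reward get higher values and become more likely to be chosen again.

```python
import random

random.seed(1)
N_STATES = 5           # corridor states 0..4; reaching state 4 pays +1
ACTIONS = [1, -1]      # step right or left
alpha, gamma, eps = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

for episode in range(300):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit what has paid off, sometimes explore
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        # positive feedback propagates backwards through the value table
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

# The learned greedy policy should prefer moving right, toward the reward.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```

The same feedback loop, scaled up with neural networks standing in for the table and self-play generating the experience, is what let AlphaGo improve without hand-coded Go knowledge.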

Google's AI can now lip read better than humans after watching thousands of hours of TV


Using related techniques, researchers were able to create a lip-reading program called LipNet that achieved 93.4 percent accuracy in tests, compared with 52.3 percent accuracy for human lip readers. By comparison, DeepMind's software--known as "Watch, Listen, Attend, and Spell"--was tested on far more challenging footage: natural, unscripted conversations from BBC politics shows. More than 5,000 hours of footage from TV shows including Newsnight, Question Time, and the World Today was used to train the program. The videos included 118,000 different sentences and some 17,500 unique words, compared with LipNet's test database of just 51 unique words.

AlphaGo beats Ke Jie again to wrap up three-part match


AlphaGo has again defeated Ke Jie, the world's number one Go player, in their second game, meaning the AI has secured victory in the three-part match. "For the first 100 moves it was the closest we've ever seen anyone play against the Master version of AlphaGo," DeepMind CEO Demis Hassabis said at the post-game press conference. Until AlphaGo beat Lee Sedol, solving the ancient Chinese board game of Go had long been a north star for computer scientists because of its unparalleled complexity and huge number of potential moves. The final game will be played on Saturday, while on Friday AlphaGo will be further put to the test in two exhibition matches: one in which it acts as a teammate to two Chinese pros playing each other, and another in which it takes on five Chinese pros at once.