Leisure & Entertainment

AlphaGo is the world's No. 1 Go player, marking AI's power over the human mind


AlphaGo, an artificial intelligence (AI) programme developed in 2014 by DeepMind, the research lab of the world's largest internet search engine Google, vanquished China's Ke Jie, the top player of the game of Go, in all three matches this week in Wuzhen, Zhejiang province. The victories vindicate Google's effort to move beyond internet search and organising the world's information into artificial intelligence and machine learning, as it shifts from a "mobile first" world to an AI-led one. Chinese technology companies including Alibaba Group Holding, Tencent Holdings and Baidu Inc have all been pushing hard into AI, seeing the technology as the new frontier for transforming existing industries and creating new services and products. New technology increases daily productivity and opens up countless opportunities for businesses, especially in healthcare, transportation and government, said Eric Schmidt, executive chairman of Google's parent firm Alphabet, in Wuzhen this week.

World's top weiqi player Ke Jie loses third match against AlphaGo


The world's No. 1 weiqi (Go) player Ke Jie lost to his artificial intelligence (AI) rival, AlphaGo, in the third and final match of the summit on Saturday. Just like Ke, veteran Go player Nie thinks AlphaGo is much stronger than any human player. During the endgame, AlphaGo chose to retreat everywhere on the board, conceding Ke some territory to keep the situation stable, as the machine was already confident of winning. Stay with us to find out what the world's best human player thinks of his three matches with AlphaGo.

Google AI AlphaGo wins again, leaves humans in the dust


Human champion Ke Jie competes against AlphaGo at the Future of Go Summit. Two days ago in Zhejiang province, China, Google's Go-playing artificial intelligence AlphaGo bested current world Go champion Ke Jie in the first game of a three-part match, sliding by on a half-point victory. "According to #AlphaGo evaluations Ke Jie is playing perfectly at the moment," DeepMind co-founder Demis Hassabis tweeted during the second game. "AlphaGo wins game 2," he later announced.

Deep learning algorithms demand nearly limitless supplies of data


"We need to get more data," said Patrick Lucey, director of data science at sports consulting company STATS LLC in Chicago. For example, he and his team developed a model that looks at video data from NBA games and analyzes players' body positions to better define what an open shot looks like. Another STATS project applied deep learning algorithms to English Premier League soccer. The data science team at STATS primarily builds models in open source tools, such as Google's TensorFlow and scikit-learn, a library of machine learning models built in Python.
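As a rough illustration of the kind of model described above, here is a minimal scikit-learn sketch that classifies an "open shot" from positional features. The features, thresholds and data are synthetic stand-ins invented for this example, not STATS' actual tracking data or model.

```python
# Toy "open shot" classifier on synthetic positional features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-shot features: nearest defender distance (ft),
# shooter's distance from the basket (ft), defender closing speed (ft/s).
n = 200
defender_dist = rng.uniform(0, 12, n)
basket_dist = rng.uniform(2, 28, n)
closing_speed = rng.uniform(0, 8, n)
X = np.column_stack([defender_dist, basket_dist, closing_speed])

# Synthetic label: call a shot "open" when the nearest defender is far
# away and closing slowly.
y = (defender_dist - 0.5 * closing_speed > 4).astype(int)

model = LogisticRegression().fit(X, y)
print("training accuracy:", model.score(X, y))
```

Because the synthetic label is a linear function of the features, a linear model separates it cleanly; real tracking data would of course be far noisier and higher-dimensional.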

Google's AlphaGo Trounces Humans--But It Also Gives Them a Boost


Over the next two years, it evolved into a far more complex AI capable of beating the world's top players--nine-dan professional grandmasters like Ke Jie, who has lost two straight games against the machine this week at a match in China. Given that the top Go players rely so heavily on intuition when playing this enormously complex game--a very human talent--AlphaGo marks a turning point in the progress of artificial intelligence. The event in China also included a "pair Go" match where the machine played alongside grandmasters rather than against them. "In some cases," grandmaster Gu Li said after a pair game alongside AlphaGo, "I could not follow in his footsteps."

Google is reportedly launching yet another venture group to invest in AI


Google has established a new organization to invest in artificial intelligence (AI) startups, according to a new report. The new effort shows Google taking its experience with venture capital and applying it to AI, a type of computing that it has been increasingly using across its applications. The new organization will be separate from Google parent company Alphabet's funding activity within GV (formerly Google Ventures) and CapitalG (formerly Google Capital), Axios reported on Friday. Google has dedicated AI research groups including DeepMind, whose AlphaGo AI Go player earlier this week beat the top human Go player, Ke Jie.

Curious AI learns by exploring game worlds and making mistakes

New Scientist

So, rather than looking for a reward in the game world, the algorithm was rewarded for exploring and mastering skills that led to it discovering more about the world. This type of approach can speed up learning times and improve the efficiency of algorithms, says Max Jaderberg at Google's AI company DeepMind. Its algorithm learned much more quickly than conventional reinforcement learning approaches. Imbued with a sense of curiosity, Pathak's own AI learned to stomp on enemies and jump over pits in Mario, and learned to explore faraway rooms and walk down hallways in another game similar to Doom.

The AI fight is escalating: This is the IT giants' next move


Businesses can dabble on the edges of these offerings, for example developing Alexa "skills" that allow Amazon Echo owners to interact with a company without having to dial its call center, or jump right in, using the various cloud-based speech recognition and text-to-speech "-as-a-service" offerings to develop full-fledged automated call centers of their own. At Build in early May, Microsoft offered production versions of services previously available only in preview, including a face-tagging API and an automated Content Moderator that can approve or block text, images and videos, forwarding difficult cases to humans for review. The systems are available either already trained for particular tasks or as blank slates that can be trained on your data, and include image, text and video analysis, speech recognition and translation. Amazon's own contact-center offering, for its part, integrates with its speech recognition and understanding services, allowing businesses to create more sophisticated interactive voice-response (IVR) systems.
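The approve/block/escalate pattern behind a service like the Content Moderator can be sketched generically. This is not any vendor's actual API; the function names, thresholds and toy scorer are invented for illustration, with a real service supplying the trained model behind `score_fn`.

```python
# Schematic moderation flow: auto-approve clear cases, auto-block clear
# violations, and forward uncertain content to a human review queue.
def moderate(text, score_fn, block_above=0.8, approve_below=0.2):
    """Return 'blocked', 'approved', or 'needs_human_review'."""
    score = score_fn(text)  # estimated probability the content violates policy
    if score >= block_above:
        return "blocked"
    if score <= approve_below:
        return "approved"
    return "needs_human_review"

# Stand-in scorer: a real service would use a trained model instead.
BANNED = {"spamword"}

def toy_score(text):
    words = text.lower().split()
    return sum(w in BANNED for w in words) / max(len(words), 1)

print(moderate("buy spamword now", toy_score))   # uncertain -> human review
print(moderate("hello there friend", toy_score)) # clean -> approved
```

The two thresholds are the key design lever: widening the gap between them sends more traffic to humans but reduces automated mistakes.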

Truly intelligent enemies could change the face of gaming


Ditching mind control in favor of a strongman cult creates a more fertile arena for dynamic relationships to develop between players and their erstwhile allies and enemies, according to Michael de Plater, creative director for Shadow of War at Monolith. The system ended up creating memorable enemies that players would recall years later on Reddit forums, de Plater said: they'd kill named Orc lieutenants, to whom the system had assigned a random personality and set of attributes, who would return with a grudge and sometimes kill the player, creating a brutal cycle with both parties knowing the score. "What we do is build tools to help developers creatively author story scenarios and author personalities for characters and the kinds of things that characters might say, but then those characters might improvise based on the space that you've authored for them," Khandaker told Engadget. What Shadow of War won't have are human enemies that players can mind control or kill in gruesome ways: your foes will be Mordor-born Orcs who span the gray-brown gamut and exhibit the violent, traitorous ways of their race.
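The "memorable enemy" loop described above can be captured in a toy data structure: a procedurally generated lieutenant that is promoted when it kills the player and remembers the grudge afterward. The names, traits and dialogue here are invented for illustration and bear no relation to Monolith's actual implementation.

```python
# Toy grudge-bearing enemy: random personality, promotion on a kill,
# and dialogue that changes once a grudge exists.
import random

random.seed(42)

TRAITS = ["cowardly", "vengeful", "boastful", "cunning"]

class Orc:
    def __init__(self, name):
        self.name = name
        self.rank = 1
        self.trait = random.choice(TRAITS)  # randomly assigned personality
        self.grudges = []                   # players this orc has killed

    def on_player_killed(self, player):
        """Killing the player earns a promotion and a personal grudge."""
        self.rank += 1
        self.grudges.append(player)

    def taunt(self, player):
        if player in self.grudges:
            return f"{self.name} the {self.trait}: 'I slew you once, {player}!'"
        return f"{self.name} the {self.trait}: 'A new victim!'"

orc = Orc("Ratbag")
print(orc.taunt("Talion"))
orc.on_player_killed("Talion")
print(orc.taunt("Talion"))
```

The point of the design is that state persists across the player's deaths, so each defeat authors a little history that the enemy can throw back at you later.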

20 years after Deep Blue, a new era in human-machine collaboration


On May 11, 1997, an IBM computer called Deep Blue defeated the reigning world chess champion, Garry Kasparov, capturing the attention and imagination of the world. Distinguished IBM Research Staff Member Murray Campbell, one of the original developers of Deep Blue, looks back at the match and explains how AI has evolved over the last 20 years to embody augmented intelligence.