For example, when Google DeepMind's AlphaGo program defeated South Korean master Lee Se-dol at the board game Go earlier this year, the terms AI, machine learning, and deep learning were all used in the media to describe how DeepMind won. Another algorithmic approach from the early machine-learning crowd, artificial neural networks, came and mostly went over the decades. Today, image recognition by machines trained via deep learning is in some scenarios better than that of humans, on tasks ranging from identifying cats to spotting indicators of cancer in blood and tumors in MRI scans. Deep learning has enabled many practical applications of machine learning, and by extension the overall field of AI.
For example, a major scientific achievement in computing this past year was AlphaGo, a reinforcement-learning program that outperformed humans at the ancient and massively complex board game Go. These DeepMind neural networks function differently from other AI platforms such as IBM's Deep Blue or Watson, which were assembled from large databases, developed for a pre-defined purpose, and able to function only within that scope. Computer programs had previously relied on Monte Carlo tree search strategies (algorithms that explore possible sequences of play by simulating many games) to find their moves, much as a chess computer selects its next move from a database of options programmed by a human. In these recent advances, however, the Google DeepMind programs that operate AlphaGo are able to select moves based on knowledge the programs learned themselves, via machine learning in an artificial neural network trained on human and computer play.
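The Monte Carlo tree search idea mentioned above can be sketched on a toy game. The snippet below is an illustrative implementation of plain MCTS (the UCT variant) on a tiny Nim-like game, not DeepMind's system; the game, the `Node` class, and the iteration count are all my own assumptions for demonstration.

```python
import math, random

# Toy Monte Carlo tree search (UCT) on Nim: two players alternately take
# 1-3 stones; whoever takes the last stone wins. Purely illustrative --
# AlphaGo combined tree search with learned neural networks, which this
# sketch does not attempt.

TAKES = (1, 2, 3)

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones, self.player = stones, player   # player to move: 0 or 1
        self.parent, self.move = parent, move
        self.children, self.wins, self.visits = [], 0, 0
        self.untried = [t for t in TAKES if t <= stones]

def rollout(stones, player):
    # Play random moves to the end of the game; return the winner.
    while True:
        take = random.choice([t for t in TAKES if t <= stones])
        stones -= take
        if stones == 0:
            return player
        player = 1 - player

def mcts(stones, player, iters=4000):
    root = Node(stones, player)
    for _ in range(iters):
        node = root
        # 1. Selection: descend by the UCT formula while fully expanded.
        while not node.untried and node.children:
            node = max(node.children, key=lambda c: c.wins / c.visits
                       + math.sqrt(2 * math.log(node.visits) / c.visits))
        # 2. Expansion: try one unexplored move.
        if node.untried:
            take = node.untried.pop()
            child = Node(node.stones - take, 1 - node.player, node, take)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from the new position.
        winner = (1 - node.player if node.stones == 0
                  else rollout(node.stones, node.player))
        # 4. Backpropagation: credit each node from the mover's perspective.
        while node:
            node.visits += 1
            if node.parent and winner == node.parent.player:
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda c: c.visits).move

# With enough simulations this tends to find the optimal move
# (leaving the opponent a multiple of 4 stones).
print(mcts(10, 0))
```

The contrast drawn in the paragraph is that this search explores positions by brute simulation, whereas AlphaGo's networks learned an evaluation of positions from play data.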
Groundbreaking AI models have bested humans in complex reasoning games, as in the recent victory of Google's AlphaGo over the human Go champion. Thoughtfully combining human expertise and automated functionality creates an "augmented" physician model that scales and advances the expertise of the doctor. Physicians would rather practice at the top of their license and handle complex patient interactions than waste time entering data and faxing (yes, faxing!) paperwork. But to radically advance health-care productivity, physicians must work alongside innovators to atomize the tasks of their work.
The words artificial intelligence (AI), machine learning (ML), and data visualization are everywhere right now. The term machine learning describes the process by which machines take in data and, in essence, learn for themselves. ML is allowing companies to reshape their business processes using digital intelligence, while data visualization is a form of visual organization that can help businesses and institutions better understand their data by identifying trends, correlations, and patterns – all essential elements in offering their clients the best experience.
When I read today's news about OpenAI's DotA 2 bot beating human players at The International, an eSports tournament with a prize pool of over $24M, I was jumping with excitement. These games require long-term strategic decision making, multiplayer cooperation, and have significantly more complex state and action spaces than Chess, Go, or Atari, all of which have been "solved" by AI techniques over the past decades. Given that 1v1 is mostly a game of mechanical skill, it is not surprising that a bot beats human players. And given the severely restricted environment, the artificially restricted set of possible actions, and that there was little to no need for long-term planning or coordination, I come to the conclusion that this problem was actually significantly easier than beating a human champion in the game of Go.
In the particular case of the Facebook negotiation chatbot, you give it examples of negotiation dialogs with the whole situation properly annotated -- what the initial state was, what the negotiator's preferences were, what was said, what the result was, and so on. The program analyzes all these examples, extracts features from each dialog, and assigns a number to each feature, representing how often dialogs with that feature ended in a positive result for the negotiator. Much as AlphaGo started by learning from real games played by real people, the bot started by learning from real dialogs. The original training data set was in English, but the extracted features were just words and phrases, and the bot was simply putting them together based on the numerical representation of how likely they were to help get the desired outcome.
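The scoring scheme described above can be sketched in a few lines: tally how often each word (a crude stand-in for a "feature") appeared in dialogs that ended well for the negotiator, then score candidate utterances by those empirical success rates. The data, function names, and word-level features here are illustrative assumptions, not Facebook's actual pipeline.

```python
from collections import defaultdict

def train(dialogs):
    """dialogs: list of (text, success_flag) pairs from annotated examples."""
    seen = defaultdict(int)   # dialogs each word appeared in
    won = defaultdict(int)    # ...of which ended positively
    for text, success in dialogs:
        for word in set(text.lower().split()):
            seen[word] += 1
            won[word] += success
    # Empirical rate at which dialogs containing each word ended well.
    return {w: won[w] / seen[w] for w in seen}

def score(utterance, rates):
    # Average success rate of the utterance's words; unseen words get 0.5.
    words = utterance.lower().split()
    return sum(rates.get(w, 0.5) for w in words) / len(words)

# Tiny hypothetical training set of annotated dialogs.
dialogs = [
    ("i need the ball and one book", 1),
    ("you can have the hats", 1),
    ("give me everything", 0),
    ("i need the ball", 1),
]
rates = train(dialogs)
print(score("i need the ball", rates))
print(score("give me everything", rates))
```

Phrasings whose words co-occurred with successful outcomes score higher, which is the "numerical representation" the paragraph refers to; the real system used richer features and a learned model rather than raw counts.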
But it's becoming clear that this type of dystopian view of artificial intelligence is out of phase with humans' real-world expectations and hopes for AI. During his contest with AlphaGo, Lee Sedol consumed approximately 400 calories (less than one bunch of broccoli), making him roughly 4,000 times more energy efficient than the machine for a similar level of intelligence. Society is in the early, exciting phases of the next generation of artificial intelligence and machine learning, a phase where humans are still vastly more efficient computing machines and are learning to harness the machine's power to work smarter. It's quite the opposite of "2001: A Space Odyssey," where humans found they couldn't work with HAL and HAL found he couldn't work with humans.
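Taking the article's two figures at face value, we can work out what they imply about the machine's energy use. This is back-of-the-envelope arithmetic from the stated 400 calories (read as kilocalories, as food "calories" usually are) and 4,000× ratio, not a measured figure for AlphaGo's hardware.

```python
# Implied machine energy use from the article's numbers (illustrative only).
KCAL_TO_KJ = 4.184          # 1 food calorie (kcal) in kilojoules
human_kcal = 400            # Lee Sedol's stated intake during the contest
efficiency_ratio = 4_000    # claimed human advantage

human_kj = human_kcal * KCAL_TO_KJ        # ~1,674 kJ
machine_kj = human_kj * efficiency_ratio  # implied machine consumption
machine_kwh = machine_kj / 3_600          # 1 kWh = 3,600 kJ

print(f"Human: {human_kj:.0f} kJ; machine (implied): {machine_kwh:.0f} kWh")
```

Under these assumptions the claim implies on the order of 1,900 kWh for the machine over the match, which gives a sense of the efficiency gap the paragraph describes.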
Given a particular input, one can often predict how a person will respond. That is not the case for the most intelligent machines in our midst. The creators of AlphaGo -- a computer program built by Google's DeepMind that decisively beat the world's finest human player of the board game Go -- admitted they could not have divined its winning moves. This unpredictability, also seen in the Facebook chatbots that were shut down after developing their own language, has stirred disquiet in the field of artificial intelligence.
AlphaGo first drew headlines last year when it beat former Go world champion Lee Sedol, and the China event took things to the next level with matches against 19-year-old world number one Ke Jie, plus doubles games with and against other top Go pros. For that reason, the Future of Go Summit is our final match event with AlphaGo. DeepMind is planning to publish a final review paper on how the AI has developed since its matches with Lee Sedol last year. Top players, even Ke Jie himself, have studied AlphaGo's moves and added some to their arsenal.