Over the last few years, the quest to build fully autonomous vehicles has shifted into high gear. Yet, despite huge advances in both the sensors and artificial intelligence (AI) required to operate these cars, one thing has so far proved elusive: developing algorithms that can accurately and consistently identify objects, movements, and road conditions. As Mathew Monfort, a postdoctoral associate and researcher at the Massachusetts Institute of Technology (MIT) puts it: "An autonomous vehicle must actually function in the real world. However, it's extremely difficult and expensive to drive actual cars around to collect all the data necessary to make the technology completely reliable and safe." All of this is leading researchers down a different path: the use of game simulations and machine learning to build better algorithms and smarter vehicles.
Hedge fund Renaissance Technologies is regarded on Wall Street with awe and envy in equal measure, in particular for the Medallion Fund, an employees-only fund it runs. Bloomberg wrote last year that the fund has returned more than $55 billion, making it more profitable than funds run by feted veterans such as George Soros. The flagship fund, which will turn 30 next year, has returned more than 25% in most of its years of investing; at that rate, money doubles in a little more than three years.
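The closing doubling-time claim is simple compound-interest arithmetic, which a couple of lines of Python can verify:

```python
import math

# At a 25% annual return, money grows by a factor of 1.25 per year,
# so it doubles after log(2) / log(1.25) years.
years_to_double = math.log(2) / math.log(1.25)
print(round(years_to_double, 2))  # ~3.11, i.e. "a little more than three years"
```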
According to Andrew Ng, transfer learning will become a key driver of machine learning success in industry. Over the course of this blog post, I will first contrast transfer learning with machine learning's most pervasive and successful paradigm, supervised learning. Fuelled by advances in deep learning, more capable computing infrastructure, and large labeled datasets, supervised learning has been largely responsible for the renewed wave of interest in AI, for funding rounds and acquisitions, and in particular for the applications of machine learning that have entered our daily lives in recent years. This is all the more notable because transfer learning currently receives relatively little visibility compared with other areas of machine learning, such as unsupervised learning and reinforcement learning, which enjoy increasing popularity: unsupervised learning, the key ingredient on the quest to general AI according to Yann LeCun (see Figure 5), has seen a resurgence of interest, driven in particular by Generative Adversarial Networks.
Included below is a version of the talk in blog-post form. This talk is about a new research agenda aimed at using machine learning to make AI systems safe even at very high capability levels. A task-directed AI system is one that pursues a semi-concrete objective in the world, like "build a million houses" or "cure cancer." We'll model more advanced AI systems simply by supposing that systems will continue to achieve higher scores on ML tasks. Suppose an AI system composes a story, and a human gives the system a reward based on how good the story is. This setup is similar to some reinforcement learning (RL) tasks: the agent wants to do something that will cause it to receive a high reward in the future.
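The reward setup described above is the standard RL objective of maximizing discounted future reward. A minimal sketch, using made-up reward numbers rather than anything from the talk:

```python
def discounted_return(rewards, gamma=0.9):
    """Sum of rewards, each discounted by gamma per step into the future."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# The same reward is worth less to the agent the further away it is:
print(round(discounted_return([0, 0, 10]), 2))  # 8.1  (reward two steps out)
print(discounted_return([10, 0, 0]))            # 10.0 (reward received now)
```

This is why such an agent prefers actions that lead to high rewards sooner rather than later.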
Sports analytics is routinely used to assign values to things such as shots taken, or to compare player performance. A new automated method based on deep learning techniques, developed by researchers at Disney Research, the California Institute of Technology, and STATS, a supplier of sports data, will give coaches and teams a quicker tool for assessing defensive athletic performance in any game situation. The method analyzes detailed game data on player and ball positions to create models, or "ghosts," of how a typical player in a league or on another team would behave when an opponent is on the attack. "With the innovation of data-driven ghosting, we can now, for the first time, scalably quantify, analyze and compare detailed defensive behavior," said Peter Carr, research scientist at Disney Research. The researchers used a type of machine learning called deep learning, which relies on brain-inspired programs called neural networks.