'Deep Learning' Will Soon Give Us Super-Smart Robots

AITopics Original Links

Yann LeCun is among those bringing a new level of artificial intelligence to popular internet services from the likes of Facebook, Google, and Microsoft. As the head of AI research at Facebook, LeCun oversees the creation of vast "neural networks" that can recognize photos and respond to everyday human language. And similar work is driving speech recognition on Google's Android phones, instant language translation on Microsoft's Skype service, and so many other online tools that can "learn" over time. Using vast networks of computer processors, these systems approximate the networks of neurons inside the human brain, and in some ways, they can outperform humans themselves. This week in the scientific journal Nature, LeCun--also a professor of computer science at New York University--details the current state of this "deep learning" technology in a paper penned alongside the two other academics most responsible for this movement: University of Toronto professor Geoff Hinton, who's now at Google, and the University of Montreal's Yoshua Bengio.
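At its core, the "learning" these systems do amounts to nudging the connection weights of a network of artificial neurons until its outputs match training examples. The toy sketch below illustrates that principle at a minuscule scale; it is a generic illustration, not code from Facebook's, Google's, or Microsoft's systems.

```python
# Illustrative only: a tiny neural network that "learns" a mapping from
# examples by gradient descent -- the same principle, at vastly smaller
# scale, behind the deep networks described above.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a pattern a single artificial neuron cannot capture.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# One hidden layer of 8 units with random initial weights.
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: each layer is a weighted sum followed by a nonlinearity.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: cross-entropy gradient at the output, propagated
    # back to every weight in the network.
    d_out = out - y
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # typically approaches [0, 1, 1, 0] as it learns
```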


An Open-Source Framework for Adaptive Traffic Signal Control

arXiv.org Artificial Intelligence

Developing optimal transportation control systems at the appropriate scale can be difficult, as cities' transportation systems are often large, complex, and stochastic. Intersection traffic signal controllers are an important element of modern transportation infrastructure, where sub-optimal control policies can incur high costs for many users. Many adaptive traffic signal controllers have been proposed by the community, but research is lacking on their relative performance; which adaptive traffic signal controller is best remains an open question. This research contributes a framework for developing and evaluating different adaptive traffic signal controller models, both learning and non-learning, in simulation, and demonstrates its capabilities. The framework is used, first, to investigate the performance variance of the modelled adaptive traffic signal controllers with respect to their hyperparameters and, second, to analyze the performance differences between controllers with optimal hyperparameters. The proposed framework contains implementations of some of the most popular adaptive traffic signal controllers from the literature: Webster's, Max-pressure, and Self-Organizing Traffic Lights, along with deep Q-network and deep deterministic policy gradient reinforcement learning controllers. This framework will aid researchers by accelerating their work from a common starting point, allowing them to generate results faster with less effort.
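For a sense of how simple some of the non-learning controllers named above are, here is a rough, self-contained sketch of a max-pressure decision step: pick the phase whose served movements have the largest total upstream-minus-downstream queue. The phase and lane names are hypothetical, and this is not code from the proposed framework.

```python
# Illustrative sketch (not the paper's framework): a generic max-pressure
# phase-selection rule for one intersection.
from typing import Dict, List, Tuple

# A movement is an (upstream_lane, downstream_lane) pair served by a phase.
Movement = Tuple[str, str]

def max_pressure_phase(phases: Dict[str, List[Movement]],
                       queues: Dict[str, int]) -> str:
    """Return the phase whose served movements have the largest total
    pressure, i.e. upstream queue length minus downstream queue length."""
    def pressure(movements: List[Movement]) -> int:
        return sum(queues[up] - queues[down] for up, down in movements)
    return max(phases, key=lambda p: pressure(phases[p]))

# Hypothetical example: two phases at a simple four-way intersection.
phases = {
    "north_south": [("N_in", "S_out"), ("S_in", "N_out")],
    "east_west":   [("E_in", "W_out"), ("W_in", "E_out")],
}
queues = {"N_in": 12, "S_in": 9, "E_in": 3, "W_in": 4,
          "N_out": 1, "S_out": 2, "E_out": 0, "W_out": 5}

print(max_pressure_phase(phases, queues))  # -> "north_south"
```

A real controller would recompute this every few seconds from detector data and enforce minimum green and clearance times; the sketch shows only the core phase-selection rule.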


NVIDIA Research Takes NeurIPS Attendees on AI Road Trip

#artificialintelligence

Take a joyride through a 3D urban neighborhood that looks like Tokyo, or New York, or maybe Rio de Janeiro -- all imagined by AI. At this week's NeurIPS conference, we introduced AI research that allows developers to render fully synthetic, interactive 3D worlds. While still at an early stage, this work shows promise for a variety of applications, including VR, autonomous vehicle development, and architecture. The tech is among several NVIDIA projects on display here in Montreal. Attendees huddled around a green and black racing chair in our booth have been wowed by the demo, which lets drivers navigate around an eight-block world rendered by the neural network.


Etalumis 'Reverses' Simulations to Reveal New Science

#artificialintelligence

Scientists have built simulations to help explain behavior in the real world, including models of disease transmission and prevention, autonomous vehicles, climate science, and the search for the fundamental secrets of the universe. But how to interpret vast volumes of experimental data in terms of these detailed simulations remains a key challenge. Probabilistic programming offers a solution--essentially reverse-engineering the simulation--but this technique has long been limited due to the need to rewrite the simulation in custom computer languages, plus the intense computing power required. To address this challenge, a multinational collaboration of researchers using computing resources at Lawrence Berkeley National Laboratory's National Energy Research Scientific Computing Center (NERSC) has developed the first probabilistic programming framework capable of controlling existing simulators and running at large-scale on HPC platforms. The system, called Etalumis ("simulate" spelled backwards), was developed by a group of scientists from the University of Oxford, University of British Columbia (UBC), Intel, New York University, CERN, and NERSC as part of a Big Data Center project.
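The core trick (running the forward simulator many times and weighting each run by how well it reproduces the observed data, then reading off a posterior over the simulator's hidden inputs) can be sketched in a few lines. The toy example below uses a hand-rolled likelihood-weighting loop over a made-up event simulator; it only conveys the idea and has nothing to do with Etalumis's actual implementation or scale.

```python
# Illustrative sketch only (not Etalumis): infer a simulator's hidden input
# from its observed output by weighting many forward runs.
import math
import random

def simulator(rate: float) -> int:
    """Toy forward simulator: count of events in a unit time window,
    generated by a Poisson-like process with the given hidden rate."""
    count, t = 0, 0.0
    while True:
        t += random.expovariate(rate)
        if t > 1.0:
            return count
        count += 1

observed = 7          # the "experimental data" we want to explain
samples, weights = [], []
for _ in range(20000):
    rate = random.uniform(0.1, 20.0)          # prior over the hidden input
    simulated = simulator(rate)
    # Weight: soft agreement between simulated and observed output.
    w = math.exp(-0.5 * (simulated - observed) ** 2)
    samples.append(rate)
    weights.append(w)

posterior_mean = sum(r * w for r, w in zip(samples, weights)) / sum(weights)
print(f"posterior mean rate: {posterior_mean:.2f}")   # close to the observed 7
```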


An AI Pioneer Wants His Algorithms to Understand the 'Why'

#artificialintelligence

In March, Yoshua Bengio received a share of the Turing Award, the highest accolade in computer science, for contributions to the development of deep learning--the technique that triggered a renaissance in artificial intelligence, leading to advances in self-driving cars, real-time speech translation, and facial recognition. Now, Bengio says deep learning needs to be fixed. He believes it won't realize its full potential, and won't deliver a true AI revolution, until it can go beyond pattern recognition and learn more about cause and effect. In other words, he says, deep learning needs to start asking why things happen. The 55-year-old professor at the University of Montreal, who sports bushy gray hair and eyebrows, says deep learning works well in idealized situations but won't come close to replicating human intelligence without being able to reason about causal relationships.
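The distinction Bengio is drawing can be made concrete with a toy structural causal model: observing an effect changes our belief about its causes, but intervening on the effect does not. The sketch below is a generic illustration with made-up variables and probabilities, not anything from Bengio's research.

```python
# Illustrative only: correlation-driven prediction vs. reasoning about an
# intervention, in a tiny made-up causal model.
import random

random.seed(0)
N = 100_000

def observe():
    """Observational world: rain makes the grass wet and also makes
    people less likely to run the sprinkler."""
    rainy = random.random() < 0.5
    sprinkler = random.random() < (0.1 if rainy else 0.4)
    wet = rainy or sprinkler
    return rainy, wet

# Pattern recognition: P(rain | grass is wet) under passive observation.
obs = [observe() for _ in range(N)]
p_rain_given_wet = (sum(r for r, w in obs if w) /
                    sum(1 for r, w in obs if w))

# Intervention: we wet the grass ourselves, e.g. with a hose. That cuts the
# causal link from rain to wetness, so rain keeps its base rate of 0.5.
p_rain_given_do_wet = 0.5

print(f"P(rain | wet observed) = {p_rain_given_wet:.2f}")   # about 0.7
print(f"P(rain | do(wet))      = {p_rain_given_do_wet:.2f}")  # stays 0.5
```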