'Deep Learning' Will Soon Give Us Super-Smart Robots

AITopics Original Links

Yann LeCun is among those bringing a new level of artificial intelligence to popular internet services from the likes of Facebook, Google, and Microsoft. As the head of AI research at Facebook, LeCun oversees the creation of vast "neural networks" that can recognize photos and respond to everyday human language. And similar work is driving speech recognition on Google's Android phones, instant language translation on Microsoft's Skype service, and so many other online tools that can "learn" over time. Using vast networks of computer processors, these systems approximate the networks of neurons inside the human brain, and in some ways, they can outperform humans themselves. This week in the scientific journal Nature, LeCun--also a professor of computer science at New York University--details the current state of this "deep learning" technology in a paper penned alongside the two other academics most responsible for this movement: University of Toronto professor Geoff Hinton, who's now at Google, and the University of Montreal's Yoshua Bengio.
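The "learning" these systems do comes down to repeatedly nudging the weights of an artificial neural network until its outputs match known examples. As a purely illustrative aside, the sketch below shows that idea in miniature: a tiny feedforward network trained with backpropagation to reproduce the XOR function, using only NumPy. It is a toy, not any of the production systems described above, and the layer sizes, learning rate, and iteration count are arbitrary choices.

```python
# Minimal sketch: a tiny feedforward neural network trained with
# backpropagation to learn XOR. Illustrative only; not the systems
# described in the article. Requires NumPy.
import numpy as np

rng = np.random.default_rng(0)

# Training data: XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer with 4 units; weights start small and random.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

lr = 1.0  # learning rate
for step in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # predictions

    # Backward pass: chain rule through the squared error and sigmoids.
    err = out - y
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 3))  # should approach [[0], [1], [1], [0]]
```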


Etalumis 'Reverses' Simulations to Reveal New Science

#artificialintelligence

Scientists have built simulations to help explain behavior in the real world, including modeling for disease transmission and prevention, autonomous vehicles, climate science, and the search for the fundamental secrets of the universe. But how to interpret vast volumes of experimental data in terms of these detailed simulations remains a key challenge. Probabilistic programming offers a solution--essentially reverse-engineering the simulation--but this technique has long been limited due to the need to rewrite the simulation in custom computer languages, plus the intense computing power required. To address this challenge, a multinational collaboration of researchers using computing resources at Lawrence Berkeley National Laboratory's National Energy Research Scientific Computing Center (NERSC) has developed the first probabilistic programming framework capable of controlling existing simulators and running at large scale on HPC platforms. The system, called Etalumis ("simulate" spelled backwards), was developed by a group of scientists from the University of Oxford, University of British Columbia (UBC), Intel, New York University, CERN, and NERSC as part of a Big Data Center project.
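Etalumis itself couples a probabilistic programming system to existing large-scale simulators, which is far beyond a few lines of code, but the general idea of "reverse-engineering" a simulation can be sketched on a toy problem: treat the simulator's unknown inputs as random variables, run it forward many times, and keep the runs whose outputs match the observed data. The sketch below uses an invented one-parameter simulator and plain NumPy rejection sampling; it is a conceptual stand-in, not the Etalumis framework.

```python
# Conceptual sketch of "inverting" a simulator by conditioning forward
# runs on observed data. The simulator, prior, and observation are
# invented for illustration; this is not the Etalumis framework.
import numpy as np

rng = np.random.default_rng(1)

def simulator(rate, rng):
    """Toy stochastic simulator: event counts produced at a given rate."""
    return rng.poisson(lam=rate)

observed_count = 7        # the "experimental data" we want to explain
n_samples = 100_000

# Prior belief about the unknown simulator input (the event rate).
rates = rng.uniform(0.0, 20.0, size=n_samples)

# Run the simulator forward for every sampled rate, then keep only the
# runs that reproduce the observation (simple rejection sampling).
outputs = simulator(rates, rng)
keep = outputs == observed_count

# Posterior over the rate, approximated by the accepted samples.
posterior_mean = rates[keep].mean()
print(f"posterior mean rate: {posterior_mean:.2f} "
      f"(kept {keep.sum()} of {n_samples} runs)")
```

The design choice here is the simplest possible one: accept or reject whole simulator runs. Frameworks like Etalumis instead instrument the simulator's internal random choices so inference can be guided at every step, which is what makes the approach workable at HPC scale.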


NVIDIA Research Takes NeurIPS Attendees on AI Road Trip (NVIDIA Blog)

#artificialintelligence

Take a joyride through a 3D urban neighborhood that looks like Tokyo, or New York, or maybe Rio de Janeiro -- all imagined by AI. At this week's NeurIPS conference, we've introduced AI research that allows developers to render fully synthetic, interactive 3D worlds. While still early stage, this work shows promise for a variety of applications, including VR, autonomous vehicle development and architecture. The tech is among several NVIDIA projects on display here in Montreal. Attendees huddled around a green and black racing chair in our booth have been wowed by the demo, which lets drivers navigate around an eight-block world rendered by the neural network.


'Godfather' of deep learning is reimagining AI

#artificialintelligence

Geoffrey Hinton may be the "godfather" of deep learning, a suddenly hot field of artificial intelligence, or AI – but that doesn't mean he's resting on his algorithms. Hinton, a University Professor Emeritus at the University of Toronto, recently released two new papers that promise to improve the way machines understand the world through images or video – a technology with applications ranging from self-driving cars to making medical diagnoses. "This is a much more robust way to detect objects than what we have at present," Hinton, who is also a fellow at Google's AI research arm, said today at a tech conference in Toronto. "If you've been in the field for a long time like I have, you know that the neural nets that we use now – there's nothing special about them. We just sort of made them up."


An AI Pioneer Wants His Algorithms to Understand the 'Why'

#artificialintelligence

In March, Yoshua Bengio received a share of the Turing Award, the highest accolade in computer science, for contributions to the development of deep learning--the technique that triggered a renaissance in artificial intelligence, leading to advances in self-driving cars, real-time speech translation, and facial recognition. Now, Bengio says deep learning needs to be fixed. He believes it won't realize its full potential, and won't deliver a true AI revolution, until it can go beyond pattern recognition and learn more about cause and effect. In other words, he says, deep learning needs to start asking why things happen. The 55-year-old professor at the University of Montreal, who sports bushy gray hair and eyebrows, says deep learning works well in idealized situations but won't come close to replicating human intelligence without being able to reason about causal relationships.
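The gap Bengio describes between pattern recognition and causal reasoning can be made concrete with a toy example: when a hidden factor drives both a feature and an outcome, a purely observational learner will treat the feature as highly predictive, yet forcing that feature to change has no effect at all. The sketch below simulates such a confounded system in plain NumPy; the variables and probabilities are invented for illustration and are not taken from Bengio's work.

```python
# Toy illustration of correlation vs. causation (invented variables,
# not from the article). A hidden confounder drives both a "marker" X
# and an outcome Y, so X predicts Y observationally even though
# intervening on X has no effect on Y.
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

def sample(intervene_x=None):
    z = rng.binomial(1, 0.5, size=n)        # hidden confounder
    x = rng.binomial(1, 0.1 + 0.8 * z)      # marker, driven by z
    if intervene_x is not None:
        x = np.full(n, intervene_x)         # do(X = value)
    y = rng.binomial(1, 0.1 + 0.8 * z)      # outcome, also driven by z
    return x, y

# Observational world: X looks highly predictive of Y.
x, y = sample()
print("P(Y=1 | X=1) observed:", round(y[x == 1].mean(), 3))
print("P(Y=1 | X=0) observed:", round(y[x == 0].mean(), 3))

# Interventional world: setting X by force leaves Y unchanged.
_, y_do1 = sample(intervene_x=1)
_, y_do0 = sample(intervene_x=0)
print("P(Y=1 | do(X=1))     :", round(y_do1.mean(), 3))
print("P(Y=1 | do(X=0))     :", round(y_do0.mean(), 3))
```

A standard classifier trained on the observational samples would happily use X to predict Y, which is exactly the kind of pattern matching Bengio argues falls short of reasoning about what actually causes what.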