NVIDIA Research Takes NeurIPS Attendees on AI Road Trip (NVIDIA Blog)

#artificialintelligence

Take a joyride through a 3D urban neighborhood that looks like Tokyo, or New York, or maybe Rio de Janeiro -- all imagined by AI. At this week's NeurIPS conference, we've introduced AI research that allows developers to render fully synthetic, interactive 3D worlds. While still early stage, this work shows promise for a variety of applications, including VR, autonomous vehicle development and architecture. The tech is among several NVIDIA projects on display here in Montreal. Attendees huddled around a green and black racing chair in our booth have been wowed by the demo, which lets drivers navigate around an eight-block world rendered by the neural network.


Etalumis 'Reverses' Simulations to Reveal New Science

#artificialintelligence

Scientists have built simulations to help explain behavior in the real world, including models of disease transmission and prevention, autonomous vehicles, climate science, and the search for the fundamental secrets of the universe. But how to interpret vast volumes of experimental data in terms of these detailed simulations remains a key challenge. Probabilistic programming offers a solution--essentially reverse-engineering the simulation--but this technique has long been limited due to the need to rewrite the simulation in custom computer languages, plus the intense computing power required. To address this challenge, a multinational collaboration of researchers using computing resources at Lawrence Berkeley National Laboratory's National Energy Research Scientific Computing Center (NERSC) has developed the first probabilistic programming framework capable of controlling existing simulators and running at large scale on HPC platforms. The system, called Etalumis ("simulate" spelled backwards), was developed by a group of scientists from the University of Oxford, University of British Columbia (UBC), Intel, New York University, CERN, and NERSC as part of a Big Data Center project.
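The article doesn't include code, but the core idea of "reversing" a simulation can be shown in miniature: treat the simulator as a generative model, run it forward many times, and weight each run by how well its output matches the observed data, yielding a posterior over the simulator's latent inputs. The sketch below is a minimal illustration only, assuming a toy epidemic simulator and plain importance sampling; it is not the Etalumis framework, and all names are hypothetical.

```python
import math
import random


def simulator(transmission_rate):
    """Toy epidemic simulator: draws a case count given a latent rate.
    Stands in for the black-box scientific simulators Etalumis controls."""
    return sum(1 for _ in range(100) if random.random() < transmission_rate)


def likelihood(simulated, observed, noise=5.0):
    """Gaussian observation model comparing simulator output to data."""
    return math.exp(-((simulated - observed) ** 2) / (2 * noise ** 2))


def infer_rate(observed_cases, num_samples=10_000):
    """Importance sampling: weight each latent draw by how well its
    simulated output matches the data -- 'reversing' the simulation."""
    weighted_sum, weight_total = 0.0, 0.0
    for _ in range(num_samples):
        rate = random.random()                       # prior over the latent input
        weight = likelihood(simulator(rate), observed_cases)
        weighted_sum += rate * weight
        weight_total += weight
    return weighted_sum / weight_total               # posterior mean estimate


if __name__ == "__main__":
    print(f"Inferred transmission rate: {infer_rate(observed_cases=30):.3f}")
```

The actual framework attaches to existing, unmodified simulators and distributes inference across HPC nodes; the sketch above only captures the conditioning-on-observations idea at toy scale.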


'Godfather' of deep learning is reimagining AI

#artificialintelligence

Geoffrey Hinton may be the "godfather" of deep learning, a suddenly hot field of artificial intelligence, or AI – but that doesn't mean he's resting on his algorithms. Hinton, a University Professor Emeritus at the University of Toronto, recently released two new papers that promise to improve the way machines understand the world through images or video – a technology with applications ranging from self-driving cars to making medical diagnoses. "This is a much more robust way to detect objects than what we have at present," Hinton, who is also a fellow at Google's AI research arm, said today at a tech conference in Toronto. "If you've been in the field for a long time like I have, you know that the neural nets that we use now – there's nothing special about them. We just sort of made them up."


An AI Pioneer Wants His Algorithms to Understand the 'Why'

#artificialintelligence

In March, Yoshua Bengio received a share of the Turing Award, the highest accolade in computer science, for contributions to the development of deep learning--the technique that triggered a renaissance in artificial intelligence, leading to advances in self-driving cars, real-time speech translation, and facial recognition. Now, Bengio says deep learning needs to be fixed. He believes it won't realize its full potential, and won't deliver a true AI revolution, until it can go beyond pattern recognition and learn more about cause and effect. In other words, he says, deep learning needs to start asking why things happen. The 55-year-old professor at the University of Montreal, who sports bushy gray hair and eyebrows, says deep learning works well in idealized situations but won't come close to replicating human intelligence without being able to reason about causal relationships.


Scientists slash computations for deep learning: 'Hashing' can eliminate more than 95 percent of computations

#artificialintelligence

"This applies to any deep-learning architecture, and the technique scales sublinearly, which means that the larger the deep neural network to which this is applied, the more the savings in computations there will be," said lead researcher Anshumali Shrivastava, an assistant professor of computer science at Rice. The research will be presented in August at the KDD 2017 conference in Halifax, Nova Scotia. It addresses one of the biggest issues facing tech giants like Google, Facebook and Microsoft as they race to build, train and deploy massive deep-learning networks for a growing body of products as diverse as self-driving cars, language translators and intelligent replies to emails. Shrivastava and Rice graduate student Ryan Spring have shown that techniques from "hashing," a tried-and-true data-indexing method, can be adapted to dramatically reduce the computational overhead for deep learning. Hashing involves the use of smart hash functions that convert data into manageable small numbers called hashes.