Scientists have built simulations to help explain behavior in the real world, including models of disease transmission and prevention, autonomous vehicles, climate science, and the search for the fundamental secrets of the universe. But interpreting vast volumes of experimental data in terms of these detailed simulations remains a key challenge. Probabilistic programming offers a solution--essentially reverse-engineering the simulation--but the technique has long been limited by the need to rewrite the simulation in custom computer languages, plus the intense computing power required. To address this challenge, a multinational collaboration of researchers using computing resources at Lawrence Berkeley National Laboratory's National Energy Research Scientific Computing Center (NERSC) has developed the first probabilistic programming framework capable of controlling existing simulators and running at large scale on HPC platforms. The system, called Etalumis ("simulate" spelled backwards), was developed by a group of scientists from the University of Oxford, University of British Columbia (UBC), Intel, New York University, CERN, and NERSC as part of a Big Data Center project.
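To give a flavor of what "reverse-engineering the simulation" means, here is a minimal sketch in plain Python (this is an illustration of the general probabilistic-programming idea, not the Etalumis framework or its API): a toy stochastic simulator with a hidden latent variable, and rejection-sampling inference that runs the unmodified simulator many times to recover which latent values are consistent with an observed outcome. The simulator, its distributions, and the `infer` helper are all hypothetical.

```python
import random

def simulator(rng):
    """Toy stochastic simulator: a hidden 'process' choice produces a
    noisy observable. In real settings this would be an existing,
    unmodified scientific simulator."""
    process = rng.choice(["a", "b"])           # latent cause
    mean = 1.0 if process == "a" else 3.0
    observation = rng.gauss(mean, 1.0)         # noisy observable
    return process, observation

def infer(observed, n_samples=100_000, tolerance=0.5, seed=0):
    """Approximate P(process | observation near `observed`) by
    rejection sampling: run the simulator repeatedly and keep only the
    runs whose output lands close to the observed value."""
    rng = random.Random(seed)
    counts = {"a": 0, "b": 0}
    for _ in range(n_samples):
        process, obs = simulator(rng)
        if abs(obs - observed) < tolerance:
            counts[process] += 1
    total = counts["a"] + counts["b"]
    return {k: v / total for k, v in counts.items()}

posterior = infer(observed=3.2)
print(posterior)
```

For an observation near 3, the posterior should strongly favor process "b", whose output distribution is centered there. Rejection sampling scales poorly, which is why frameworks like Etalumis pair more sophisticated inference engines with HPC-scale compute.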
Take a joyride through a 3D urban neighborhood that looks like Tokyo, or New York, or maybe Rio de Janeiro -- all imagined by AI. At this week's NeurIPS conference, we introduced AI research that allows developers to render fully synthetic, interactive 3D worlds. While still early-stage, this work shows promise for a variety of applications, including VR, autonomous vehicle development, and architecture. The tech is among several NVIDIA projects on display here in Montreal. Attendees huddled around a green and black racing chair in our booth have been wowed by the demo, which lets drivers navigate an eight-block world rendered by the neural network.
After years in the (mostly Canadian) wilderness followed by seven years of plenty, Deep Learning was officially recognized as the "dominant" AI paradigm and "a critical component of computing," with its three key proponents, Geoffrey Hinton, Yann LeCun, and Yoshua Bengio, receiving the Turing Award in March 2019. [Photo: Turing Award winners (from left to right) Yoshua Bengio, Yann LeCun, and Geoffrey Hinton at the ReWork Deep Learning Summit, Montreal, October 2017.] In October 2012, a deep neural network achieved an error rate of only 16% in the ImageNet Large Scale Visual Recognition Challenge, a significant improvement over the 25% error rate achieved by the best entry the year before. Yann LeCun: "The difference there was so great that a lot of people, you could see a big switch in their head going 'clunk.' Now they were convinced." Geoffrey Hinton: "Until we could produce results that were clearly better than the current state of the art, people were very skeptical." Yoshua Bengio: "[Anyone hoping to make the next Turing-winning breakthrough in AI] should not follow the trend--which right now is deep learning." Deep Learning is a "critical component of computing"… or biology? As is customary for Turing Award laureates, Hinton, LeCun, and Bengio delivered the A. M. Turing Lecture.