"Many researchers … speculate that the information-processing abilities of biological neural systems must follow from highly parallel processes operating on representations that are distributed over many neurons. [Artificial neural networks] capture this kind of highly parallel computation based on distributed representations"
– from Machine Learning (Section 4.1.1; page 82) by Tom M. Mitchell, McGraw Hill Companies, Inc. (1997).
Authors: Tero Karras (NVIDIA), Samuli Laine (NVIDIA), Timo Aila (NVIDIA). Abstract: We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.
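The scale-specific style control the abstract describes builds on adaptive instance normalization (AdaIN) from the style-transfer literature: each feature map is normalized, then re-scaled and re-shifted by per-channel style statistics. A minimal numpy sketch of AdaIN, purely illustrative — the array shapes and names here are assumptions, not the paper's implementation:

```python
import numpy as np

def adain(content, style_scale, style_bias, eps=1e-5):
    """Adaptive instance normalization on a (channels, H, W) feature block."""
    # Normalize each feature map to zero mean, unit variance...
    mean = content.mean(axis=(1, 2), keepdims=True)
    std = content.std(axis=(1, 2), keepdims=True)
    normalized = (content - mean) / (std + eps)
    # ...then re-scale and shift with per-channel style statistics.
    return style_scale[:, None, None] * normalized + style_bias[:, None, None]

rng = np.random.default_rng(1)
features = rng.standard_normal((8, 4, 4))  # 8 channels, 4x4 spatial grid
scale = 1.0 + rng.standard_normal(8)       # per-channel style "scale"
bias = rng.standard_normal(8)              # per-channel style "shift"
styled = adain(features, scale, bias)
```

After the operation, each channel's mean equals the style bias, which is how a style vector can steer synthesis at a given scale.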
Google Assistant hasn't been traveling, but it has picked up some new accents. The voice assistant can now speak in an Australian or English accent (though Google calls the latter British). The feature is available across all devices, including Android phones and Google Home speakers, but only for English speakers in the US for the time being. To produce the accents accurately, Google is tapping into the artificial intelligence of DeepMind. Google Assistant uses WaveNet, the AI company's speech synthesis model powered by deep neural networks, to generate natural-sounding voices.
The vastness of the archival chemistry literature is both a blessing and a curse. The reaction that you're looking for is probably in there, provided you take enough time to search for it. Gao et al. trained a neural network model on 10 million known reactions to speed up this process. Specifically, the model was charged with predicting a catalyst, reagents, solvents, and temperature to achieve a given transformation. When tested, the model's top-10 list of suggestions closely matched the actual conditions nearly 70% of the time, with a temperature error margin of 20 °C.
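The 70% figure is a top-10 accuracy with a temperature tolerance: a prediction counts as correct if the recorded conditions appear anywhere in the model's ten highest-ranked suggestions and the predicted temperature falls within the margin. A toy sketch of how such a metric could be computed — the records, condition strings, and field names below are invented for illustration, not Gao et al.'s data:

```python
# Hypothetical evaluation records: each pairs a model's ranked suggestions
# with the conditions actually reported in the literature.
predictions = [
    {"ranked_conditions": ["Pd/C + H2", "PtO2 + H2", "Raney Ni + H2"],
     "predicted_temp_c": 25.0},
    {"ranked_conditions": ["NaBH4 in MeOH", "LiAlH4 in THF"],
     "predicted_temp_c": 0.0},
]
actuals = [
    {"conditions": "PtO2 + H2", "temp_c": 30.0},
    {"conditions": "DIBAL-H in toluene", "temp_c": -78.0},
]

def top_k_accuracy(predictions, actuals, k=10, temp_tolerance_c=20.0):
    """Fraction of cases where the true conditions appear in the top-k
    suggestions and the predicted temperature is within the tolerance."""
    hits = 0
    for pred, act in zip(predictions, actuals):
        condition_match = act["conditions"] in pred["ranked_conditions"][:k]
        temp_match = abs(pred["predicted_temp_c"] - act["temp_c"]) <= temp_tolerance_c
        if condition_match and temp_match:
            hits += 1
    return hits / len(actuals)

accuracy = top_k_accuracy(predictions, actuals)  # 1 of 2 records matches
```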
With the recent progress in artificial intelligence (AI) algorithms, a dramatic increase in computational capacity, and the availability of the big data necessary for training deep neural networks, many AI applications have become available on the market, and automation has begun to penetrate all spheres of human activity and all industries. While the topic of AI has been getting a great deal of media coverage and public attention, profound research on its socio-economic and policy effects, especially with regard to entrepreneurship, has yet to be developed. Moreover, methodological papers in the artificial intelligence field have mainly been published in highly technical venues, making it difficult for a broader public to grasp the most recent developments in this area. The purpose of this special issue is to address these shortcomings. This special issue is the first initiative to bring technical and methodological papers in AI into dialogue with papers exploring the socio-economic, entrepreneurship, and policy effects of AI.
David Duvenaud was working on a project involving medical data when he hit upon a major shortcoming in AI. An AI researcher at the University of Toronto, he wanted to build a deep-learning model that would predict a patient's health over time. But data from medical records is kind of messy: throughout your life, you might visit the doctor at different times for different reasons, generating a smattering of measurements at arbitrary intervals. A traditional neural network struggles to handle this. Its design requires it to learn from data with clear stages of observation.
Don't worry, they only look like the Pokemon of your nightmares. The images you are about to see are, in fact, at the very bleeding edge of machine-generated imagery, mixed with collaborative human-AI production by artist Alex Reben and a little help from some anonymous Chinese artists. Reben's latest work, dubbed AmalGAN, is derived from Google's BigGAN image-generation engine. Like other GANs (generative adversarial networks), BigGAN uses a pair of competing AI: one to randomly generate images, the other to grade said images based on how close they are to the training material. However, unlike previous iterations of image generators, BigGAN is backed by Google's mammoth computing power and uses that capability to create incredibly lifelike images.
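The generator-versus-grader setup described here is the standard GAN objective: a discriminator is trained to tell real images from generated ones, while the generator is trained to fool it. A toy numpy sketch of the two losses on 1-D data — the affine generator, logistic discriminator, and target distribution below are assumptions for illustration, vastly simpler than BigGAN:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy setting: "real" data ~ N(4, 1); the generator is an affine map of noise.
def generator(z, theta):
    a, b = theta
    return a * z + b

def discriminator(x, w):
    c, d = w
    return sigmoid(c * x + d)  # probability that x is "real"

def gan_losses(theta, w, n=512):
    z = rng.standard_normal(n)           # generator's random input
    real = 4.0 + rng.standard_normal(n)  # samples from the target distribution
    fake = generator(z, theta)
    eps = 1e-9
    # Discriminator loss: penalize calling real samples fake and vice versa.
    d_loss = -np.mean(np.log(discriminator(real, w) + eps)
                      + np.log(1.0 - discriminator(fake, w) + eps))
    # Generator loss (non-saturating): reward fooling the discriminator.
    g_loss = -np.mean(np.log(discriminator(fake, w) + eps))
    return d_loss, g_loss

d_loss, g_loss = gan_losses(theta=(1.0, 0.0), w=(1.0, -2.0))
```

Training alternates gradient steps on the two losses; BigGAN's contribution is largely scale — enormous batch sizes and model capacity applied to this same adversarial game.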
You might not see most objects in near-total darkness, but AI can. MIT scientists have developed a technique that uses a deep neural network to spot objects in extremely low light. The team trained the network to look for transparent patterns in dark images by feeding it 10,000 purposely dark, grainy, and out-of-focus pictures along with the patterns those pictures were supposed to represent. The strategy not only gave the neural network an idea of what to expect, but highlighted hidden transparent objects through the ripples they produced in what little light was present. The researchers countered the blurring by giving the network a physics lesson: it learned how a defocused camera produces blurring effects.
In the early 1970s, a British grad student named Geoff Hinton began to make simple mathematical models of how neurons in the human brain visually understand the world. Artificial neural networks, as they are called, remained an impractical technology for decades. But in 2012, Hinton and two of his grad students at the University of Toronto used them to deliver a big jump in the accuracy with which computers could recognize objects in photos. Within six months, Google had acquired a startup founded by the three researchers. Previously obscure, artificial neural networks were the talk of Silicon Valley.
AI and deep learning are invading the enterprise. NVIDIA Corporation is in the midst of an unprecedented run, delivering targeted technology and products that enable companies to learn from their data. These learnings can lead to competitive insights, help recognize new trends, fuel control systems for intelligent infrastructure, or simply provide predictive capabilities to better manage the business. The challenge in deploying these systems is one of balance. Storage in the datacenter has evolved to serve the needs of mainstream business applications, not highly parallel deep learning systems.