Deep Learning


Google Assistant can now speak with an Australian or English accent

Engadget

Google Assistant hasn't been traveling, but it has picked up some new accents. The voice assistant can now speak in an Australian or English accent (though Google calls it British). The feature is available across all devices, including Android phones and Google Home speakers, but only for English speakers in the US for the time being. To produce the accents accurately, Google is tapping DeepMind's artificial intelligence. Google Assistant uses WaveNet, the AI company's speech synthesis model powered by deep neural networks, to generate natural-sounding voices.
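
WaveNet's defining idea is a stack of dilated causal convolutions over quantized audio samples, so each predicted sample can see a long window of past audio. As a rough illustration only (omitting the gated activations, residual connections, and conditioning of the real model, and using made-up layer sizes), a minimal PyTorch-style sketch of that dilation pattern might look like this:

```python
import torch
import torch.nn as nn

class TinyWaveNet(nn.Module):
    """Illustrative stack of dilated causal 1-D convolutions (not DeepMind's code)."""
    def __init__(self, channels=32, n_classes=256, n_layers=6):
        super().__init__()
        self.input_conv = nn.Conv1d(1, channels, kernel_size=1)
        self.dilated = nn.ModuleList([
            nn.Conv1d(channels, channels, kernel_size=2, dilation=2 ** i)
            for i in range(n_layers)  # dilation doubles each layer, widening the receptive field
        ])
        self.output_conv = nn.Conv1d(channels, n_classes, kernel_size=1)

    def forward(self, x):                       # x: (batch, 1, time)
        h = self.input_conv(x)
        for conv in self.dilated:
            pad = (conv.kernel_size[0] - 1) * conv.dilation[0]
            h = torch.relu(conv(nn.functional.pad(h, (pad, 0))))  # left-pad => causal
        return self.output_conv(h)              # logits over quantized sample values

logits = TinyWaveNet()(torch.randn(1, 1, 1600))  # one short audio snippet
print(logits.shape)                              # (1, 256, 1600)
```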


Comprehensive functional genomic resource and integrative model for the human brain

Science



A radical new neural network design could overcome big challenges in AI

#artificialintelligence

David Duvenaud was working on a project involving medical data when he hit upon a major shortcoming in AI. An AI researcher at the University of Toronto, he wanted to build a deep-learning model that would predict a patient's health over time. But data from medical records is kind of messy: throughout your life, you might visit the doctor at different times for different reasons, generating a smattering of measurements at arbitrary intervals. A traditional neural network struggles to handle this. Its design requires it to learn from data with clear stages of observation.
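
The design the article refers to, the "neural ODE" from Duvenaud's group, replaces a fixed stack of layers with a learned derivative of the hidden state that an ODE solver can evaluate at whatever times observations actually happened. A minimal sketch of that idea, using a crude fixed-step Euler integrator instead of the adaptive solvers in the paper, with illustrative names and sizes throughout:

```python
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Learned derivative dh/dt = f(h, t); a tiny stand-in for the paper's networks."""
    def __init__(self, dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 32), nn.Tanh(), nn.Linear(32, dim))

    def forward(self, h, t):
        t_col = t.expand(h.shape[0], 1)           # broadcast scalar time to the batch
        return self.net(torch.cat([h, t_col], dim=1))

def odeint_euler(func, h0, times, steps_per_interval=10):
    """Integrate the hidden state up to each (possibly irregular) observation time."""
    h, states = h0, [h0]
    for t0, t1 in zip(times[:-1], times[1:]):
        dt = (t1 - t0) / steps_per_interval
        for k in range(steps_per_interval):
            h = h + dt * func(h, t0 + k * dt)     # simple fixed-step Euler update
        states.append(h)
    return torch.stack(states)                    # one hidden state per visit time

func = ODEFunc()
visit_times = torch.tensor([0.0, 0.3, 1.7, 4.2])  # arbitrary gaps between doctor visits
h0 = torch.zeros(1, 8)
print(odeint_euler(func, h0, visit_times).shape)  # (4, 1, 8)
```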


IBM SpectrumAI Brings Scalable Storage To Deep Learning

#artificialintelligence

AI and deep learning are invading the enterprise. NVIDIA Corporation is in the midst of an unprecedented run, delivering targeted technology and products that enable companies to learn from their data. These learnings can yield competitive insights, reveal new trends, fuel control systems for intelligent infrastructure, or simply provide predictive capabilities to better manage the business. The challenge in deploying these systems is one of balance. Storage in the datacenter has evolved to serve the needs of mainstream business applications, not highly parallel deep learning systems.


Deep-learning technique reveals "invisible" objects in the dark

MIT News

Small imperfections in a wine glass or tiny creases in a contact lens can be tricky to make out, even in good light. In almost total darkness, images of such transparent features or objects are nearly impossible to decipher. But now, engineers at MIT have developed a technique that can reveal these "invisible" objects, in the dark. In a study published today in Physical Review Letters, the researchers reconstructed transparent objects from images of those objects, taken in almost pitch-black conditions. They did this using a "deep neural network," a machine-learning technique that involves training a computer to associate certain inputs with specific outputs -- in this case, dark, grainy images of transparent objects and the objects themselves.
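
The training recipe described is ordinary supervised learning on input-output pairs: dark, noisy measurements in, the corresponding objects out. The sketch below shows that kind of image-to-image training loop in miniature, with random arrays standing in for the real measurements and a toy network that is not the architecture used in the MIT study:

```python
import torch
import torch.nn as nn

# Stand-ins for the real data: noisy, low-light measurements and the clean targets.
dark_frames = torch.rand(64, 1, 32, 32) * 0.05 + 0.02 * torch.randn(64, 1, 32, 32)
true_objects = torch.rand(64, 1, 32, 32)

# A tiny convolutional network; the paper's model is far larger.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    pred = model(dark_frames)                  # reconstruct the object from the dark image
    loss = nn.functional.mse_loss(pred, true_objects)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```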


Deep Learning for Computer Vision - MissingLink

#artificialintelligence

Training data is your most valuable asset, so why manage it with a file system? By managing data in a version-aware data store, MissingLink eliminates the need to copy files and only syncs changes to the data. The result is reduced load time and easy data exploration.
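
MissingLink's data store itself is proprietary, but the underlying idea of version-aware syncing (track a content hash per file per version and transfer only what changed) can be sketched generically. The helper functions below are hypothetical illustrations, not MissingLink's API:

```python
import hashlib
import tempfile
from pathlib import Path

def snapshot(data_dir):
    """Map each file to a content hash; one mapping stands in for one dataset version."""
    return {
        str(p.relative_to(data_dir)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(data_dir).rglob("*") if p.is_file()
    }

def changed_files(previous, current):
    """Only files whose hash is new or different need to be copied or re-uploaded."""
    return [path for path, digest in current.items() if previous.get(path) != digest]

# Toy demonstration with a throwaway directory standing in for a training set.
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "img_001.txt").write_text("cat")
    (Path(d) / "img_002.txt").write_text("dog")
    v1 = snapshot(d)
    (Path(d) / "img_002.txt").write_text("dog, relabeled")   # edit one file
    (Path(d) / "img_003.txt").write_text("bird")             # add one file
    v2 = snapshot(d)
    print(changed_files(v1, v2))   # only the edited and added files need syncing
```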


Deep learning in satellite imagery

#artificialintelligence

In this article, I hope to inspire you to start exploring satellite imagery datasets. This technology has gained huge momentum recently, and satellite image analysis is opening up new possibilities. Satellite data changes the game because it allows us to gather information that is not otherwise readily available to businesses. Satellite images allow you to view Earth from a broader perspective: you can point to any location on Earth and get the latest satellite images of that area. This information is also easy to access.


Waymo tests AI driving system that learns from labeled data

#artificialintelligence

Alphabet's self-driving spinoff Waymo achieved some noteworthy milestones this year, in August surpassing 10 million real-world miles with its driverless cars and last week launching Waymo One, a commercial driverless taxi service. But its researchers have their eyes fixed on the future. In a blog post published today on Medium, researchers Mayank Bansal and Abhijit Ogale detailed an approach to AI driver training that taps labeled data -- that is to say, Waymo's millions of annotated miles from expert driving demonstrations -- in a supervised manner. "In recent years, the supervised training of deep neural networks using large amounts of labeled data has rapidly improved the state-of-the-art in many fields, particularly in the area of object perception and prediction, and these technologies are used extensively at Waymo," the researchers wrote. "Following the success of neural networks for perception, we naturally asked ourselves the question: … can we train a skilled driver using a purely supervised deep learning approach?"
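
Stripped to its core, the supervised recipe the researchers describe is behavioral cloning: regress the expert's recorded controls from a representation of the scene. A heavily simplified sketch, with random tensors standing in for Waymo's annotated driving data and a toy network in place of theirs:

```python
import torch
import torch.nn as nn

# Stand-ins for the labeled demonstrations: scene features and the expert's controls.
scene_features = torch.randn(256, 64)              # e.g. encoded roadgraph and nearby objects
expert_controls = torch.randn(256, 2)              # e.g. steering and acceleration labels

policy = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Behavioral cloning: plain supervised regression onto the expert's actions.
for step in range(200):
    pred = policy(scene_features)
    loss = nn.functional.mse_loss(pred, expert_controls)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final imitation loss: {loss.item():.4f}")
```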


Why artificial intelligence is likely to take more lives

#artificialintelligence

Artificial neurons for deeply intelligent machines – this is the new artificial intelligence (AI) revolution, led by Geoffrey Hinton and his team since 2012. That year, Hinton, an expert in cognitive science at the University of Toronto and a researcher at Google Brain, demonstrated the striking effectiveness of a deep neural network (DNN) in an image-categorisation task. In the wake of these remarkable results, universities – and international corporations – invested massively in the promising and fascinating field of AI. Yet despite the impressive performance of DNNs in a variety of fields (visual and vocal recognition, translation, medical imagery, etc.), questions remain regarding the limits of deep learning for other uses, such as autonomous vehicles. To understand the limits of AI in its current state, we need to understand where DNNs come from and, above all, which areas of the human brain they are modelled on – little is known about this in industrial engineering, and even in some research centres.


New Intel Device Promotes AI Algorithms, Computer Vision at Network Edge

#artificialintelligence

Chipmaker Intel has unveiled a new version of its plug-in neural network device aimed at helping developers reach further into the domain of artificial intelligence and edge computing. The Neural Compute Stick 2 is effectively a USB stick containing the Movidius Myriad X Vision Processing Unit. The new unit is, in essence, a chip that performs eight times faster than previous stick versions, says Intel, and is designed to carry out accelerated computations related to computer vision and image recognition on network edge devices. Intel said the stick could be used to "prototype and deploy deep neural network applications smarter and more efficiently with a tiny, fanless, deep learning development kit designed to enable a new generation of intelligent devices." Developers insert the NCS 2 into a compatible USB 3.0 port on their computers and configure it with AI and computer vision know-how before slotting it into a smart device and testing its capabilities.
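
In practice, models are prepared on a host machine and then executed on the stick's Myriad X VPU through Intel's OpenVINO toolkit. The sketch below assumes OpenVINO's Inference Engine Python API as it existed around the NCS 2 launch; exact module and method names vary by release, and the model files, input-name lookup, and input shape here are placeholders rather than anything shipped with the kit:

```python
# Rough sketch of running a pre-converted IR model on the Neural Compute Stick 2.
# API names follow older OpenVINO Inference Engine releases and may differ by version.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="face_detect.xml", weights="face_detect.bin")  # placeholder IR files
exec_net = ie.load_network(network=net, device_name="MYRIAD")  # target the NCS 2 VPU

input_blob = next(iter(net.input_info))        # older releases expose net.inputs instead

frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed NCHW input, stand-in for a camera frame
result = exec_net.infer(inputs={input_blob: frame})
print({name: out.shape for name, out in result.items()})
```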