Neuroscientists Transform Brain Activity to Speech with AI

#artificialintelligence

Artificial intelligence is enabling many scientific breakthroughs, especially in fields of study that generate high volumes of complex data, such as neuroscience. As impossible as it may seem, neuroscientists are making strides in decoding neural activity into speech using artificial neural networks. Yesterday, the neuroscience team of Gopala K. Anumanchipalli, Josh Chartier, and Edward F. Chang of the University of California, San Francisco (UCSF) published a study in Nature using artificial intelligence and a state-of-the-art brain-machine interface to produce synthetic speech from brain recordings. The concept is relatively straightforward: record the brain activity and audio of participants while they read aloud, build a system that decodes the brain signals into vocal tract movements, then synthesize speech from the decoded movements. Executing the concept, however, required sophisticated finessing of cutting-edge AI techniques and tools.
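As a rough illustration of that two-stage pipeline, here is a minimal sketch in TensorFlow/Keras: one recurrent network maps neural recordings to vocal tract movements, and a second maps those movements to acoustic features. All shapes, layer sizes, and feature counts below are illustrative assumptions, not the authors' published architecture.

```python
# Minimal sketch of the two-stage decoding idea: neural activity ->
# vocal tract movements -> acoustic features. Every dimension here is
# an illustrative assumption, not the published model.
import numpy as np
import tensorflow as tf

T = 100          # time steps in a recording window (assumed)
N_NEURAL = 256   # neural recording channels (assumed)
N_MOVE = 33      # articulatory kinematic features (assumed)
N_ACOUSTIC = 32  # acoustic/spectral features for a vocoder (assumed)

# Stage 1: decode vocal tract movements from brain activity.
brain_to_movement = tf.keras.Sequential([
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(64, return_sequences=True),
        input_shape=(T, N_NEURAL)),
    tf.keras.layers.Dense(N_MOVE),
])

# Stage 2: synthesize acoustic features from the decoded movements;
# a vocoder would turn these into an audible waveform.
movement_to_speech = tf.keras.Sequential([
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(64, return_sequences=True),
        input_shape=(T, N_MOVE)),
    tf.keras.layers.Dense(N_ACOUSTIC),
])

# Chain the stages on stand-in data to show the flow of information.
neural = np.random.randn(1, T, N_NEURAL).astype("float32")
movements = brain_to_movement(neural)
acoustics = movement_to_speech(movements)
print(acoustics.shape)  # (1, 100, 32)
```

In practice each stage would be trained separately on recorded data, which is what makes the intermediate articulatory representation useful: decoding movements from the brain and synthesizing audio from movements are each easier problems than mapping brain activity directly to sound.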


How Drive.ai Is Mastering Autonomous Driving With Deep Learning

#artificialintelligence

Among all of the self-driving startups working toward Level 4 autonomy (a self-driving system that doesn't require human intervention in most scenarios), Mountain View, Calif.-based Drive.ai sees deep learning as the only viable way to make a truly useful autonomous car in the near term, says Sameep Tandon, cofounder and CEO. "If you look at the long-term possibilities of these algorithms and how people are going to build [self-driving cars] in the future, having a learning system just makes the most sense. There's so much complication in driving, there are so many things that are nuanced and hard, that if you have to do this in ways that aren't learned, then you're never going to get these cars out there." It's only been about a year since Drive went public, but already, the company has a fleet of four vehicles navigating (mostly) autonomously around the San Francisco Bay Area, even in situations (such as darkness, rain, or hail) that are notoriously difficult for self-driving cars. Last month, we went out to California to take a ride in one of Drive's cars, and to find out how it's using deep learning to master autonomous driving.


UCSF, NVIDIA join to research AI use in medical imaging

#artificialintelligence

UC San Francisco is upping its research into advanced computing in healthcare, launching an artificial intelligence center specifically to advance its use in medical imaging. The Center for Intelligent Imaging will develop and apply artificial intelligence in the quest to find new ways to use radiology to look inside the body and to evaluate health and disease. UCSF investigators in the center will work with Santa Clara, Calif.-based NVIDIA, which develops AI products, to support the center's infrastructure and tools. The collaboration will aim to create new ways to enable the translation of AI into clinical practice. "Artificial intelligence represents the next frontier for diagnostic medicine," says Christopher Hess, MD, chair of UCSF's Department of Radiology and Biomedical Imaging.


Google releases TensorFlow 1.0 with new machine learning tools

#artificialintelligence

At Google's inaugural TensorFlow Dev Summit in Mountain View, California, today, Google announced the release of version 1.0 of its TensorFlow open source framework for deep learning, a trendy type of artificial intelligence. Google says the release is now production-ready by way of a stable application programming interface (API). But there are also new tools in the framework, which is built around artificial neural networks that can be trained on data and can then make inferences about new data. There are now more traditional machine learning tools as well, including K-means and support vector machines (SVMs), TensorFlow engineering director Rajat Monga said at the conference. And there's an integration with the Python-based Keras library, which was originally meant to ease the use of the Theano deep learning framework.
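As a taste of what the Keras integration enables, here is a minimal sketch of defining and training a small network through the Keras API with TensorFlow executing the computation. It is written against the modern tf.keras namespace (in the 1.x line the integration first surfaced under tf.contrib); the model and the random stand-in data are illustrative only.

```python
# Minimal sketch: a small classifier built with the Keras layers API,
# executed by TensorFlow. Architecture and data are illustrative only.
import numpy as np
import tensorflow as tf

# Two dense layers: 20 input features, 2 output classes.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random stand-in data, just to exercise the end-to-end training flow.
X = np.random.randn(128, 20).astype("float32")
y = np.random.randint(0, 2, size=(128,))
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
```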