Neuroscientists Transform Brain Activity to Speech with AI

#artificialintelligence

Artificial intelligence is enabling many scientific breakthroughs, especially in fields that generate high volumes of complex data, such as neuroscience. As impossible as it may seem, neuroscientists are making strides in decoding neural activity into speech using artificial neural networks. Yesterday, the neuroscience team of Gopala K. Anumanchipalli, Josh Chartier, and Edward F. Chang of the University of California, San Francisco (UCSF) published their study in Nature, using artificial intelligence and a state-of-the-art brain-machine interface to produce synthetic speech from brain recordings. The concept is relatively straightforward: record participants' brain activity and audio while they read aloud, train a system that decodes the brain signals into vocal tract movements, then synthesize speech from those decoded movements. Executing the concept, however, required sophisticated finessing of cutting-edge AI techniques and tools.
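
To make the two-stage pipeline concrete, here is a minimal sketch of what such a decoder could look like in PyTorch. The layer sizes, the choice of LSTMs, and the dimension names (ecog_channels, articulator_dims, acoustic_dims) are assumptions for illustration, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class BrainToArticulation(nn.Module):
    """Stage 1: decode neural recordings into vocal tract movement features."""
    def __init__(self, ecog_channels=256, articulator_dims=33, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(ecog_channels, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, articulator_dims)

    def forward(self, ecog):              # ecog: (batch, time, channels)
        h, _ = self.lstm(ecog)
        return self.out(h)                # (batch, time, articulator_dims)

class ArticulationToSpeech(nn.Module):
    """Stage 2: map decoded movements to acoustic features for a vocoder."""
    def __init__(self, articulator_dims=33, acoustic_dims=32, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(articulator_dims, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, acoustic_dims)

    def forward(self, movements):
        h, _ = self.lstm(movements)
        return self.out(h)

# Chained inference: neural activity -> movements -> acoustic features.
stage1, stage2 = BrainToArticulation(), ArticulationToSpeech()
ecog = torch.randn(1, 500, 256)           # one 500-frame recording (dummy data)
acoustics = stage2(stage1(ecog))          # would feed a vocoder to produce audio
print(acoustics.shape)                    # torch.Size([1, 500, 32])
```

Separating the two stages mirrors the idea described above: the intermediate vocal tract representation is what the brain signals are decoded into, and speech is synthesized only from that intermediate output.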


How Drive.ai Is Mastering Autonomous Driving with Deep Learning

#artificialintelligence

Among all of the self-driving startups working toward Level 4 autonomy (a self-driving system that doesn't require human intervention in most scenarios), Mountain View, Calif.-based Drive.ai sees deep learning as the only viable way to make a truly useful autonomous car in the near term, says Sameep Tandon, cofounder and CEO. "If you look at the long-term possibilities of these algorithms and how people are going to build [self-driving cars] in the future, having a learning system just makes the most sense. There's so much complication in driving, there are so many things that are nuanced and hard, that if you have to do this in ways that aren't learned, then you're never going to get these cars out there." It's only been about a year since Drive.ai went public, but the company already has a fleet of four vehicles navigating (mostly) autonomously around the San Francisco Bay Area, even in situations (such as darkness, rain, or hail) that are notoriously difficult for self-driving cars. Last month, we went out to California to take a ride in one of Drive.ai's cars and to find out how it's using deep learning to master autonomous driving.


Introduction to Machine Learning for Coders: Launch · fast.ai

#artificialintelligence

The course, recorded at the University of San Francisco as part of the Master of Science in Data Science curriculum, covers the most important practical foundations for modern machine learning. There are 12 lessons, each around two hours long; a list of all the lessons, along with a screenshot from each, is at the end of this post. There are some excellent machine learning courses already, most notably the wonderful Coursera course from Andrew Ng. But that course is showing its age now, particularly since it uses MATLAB for coursework. This new course uses modern tools and libraries, including Python, pandas, scikit-learn, and PyTorch.
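
As a rough illustration of the kind of pandas + scikit-learn workflow a course like this teaches, here is a minimal sketch: load tabular data, split it, fit a random forest, and score it. The toy dataset and column names are invented for this example and are not taken from the course's notebooks.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy tabular dataset (invented for illustration).
df = pd.DataFrame({
    "age":    [22, 35, 58, 44, 29, 61, 50, 33],
    "income": [28_000, 52_000, 61_000, 48_000, 39_000, 75_000, 55_000, 41_000],
    "bought": [0, 1, 1, 0, 0, 1, 1, 0],
})

X, y = df[["age", "income"]], df["bought"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Fit a random forest and evaluate on the held-out rows.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))
```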


Google AI claims 99% accuracy in metastatic breast cancer detection

#artificialintelligence

A 2009 study of 102 breast cancer patients at two Boston health centers found that one in four was affected by "process of care" failures such as inadequate physical examinations and incomplete diagnostic tests. That's one of the reasons that, of the half a million deaths worldwide caused by breast cancer, an estimated 90 percent are the result of metastasis. But researchers at the Naval Medical Center San Diego and Google AI, a division within Google dedicated to artificial intelligence (AI) research, have developed a promising solution employing cancer-detecting algorithms that autonomously evaluate lymph node biopsies. Their AI system, dubbed Lymph Node Assistant (LYNA), is described in a paper titled "Artificial Intelligence-Based Breast Cancer Nodal Metastasis Detection," published in The American Journal of Surgical Pathology. In tests, it achieved an area under the receiver operating characteristic curve (AUC), a measure of detection accuracy, of 99 percent.
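
For readers unfamiliar with the metric, here is a minimal sketch of how an AUC like the one reported for LYNA is computed with scikit-learn. The labels and scores below are dummy values invented for illustration, not LYNA's actual outputs.

```python
from sklearn.metrics import roc_auc_score

# Dummy slide-level labels (1 = metastasis present) and model scores,
# invented for illustration; not LYNA's real predictions.
y_true   = [0, 0, 0, 1, 1, 0, 1, 1, 0, 1]
y_scores = [0.05, 0.10, 0.32, 0.88, 0.91, 0.20, 0.75, 0.97, 0.15, 0.83]

# AUC is the probability that a randomly chosen positive case is scored
# higher than a randomly chosen negative case; 1.0 means a perfect ranking.
print(roc_auc_score(y_true, y_scores))   # 1.0 for these dummy scores
```

An AUC of 99 percent therefore means the model ranks metastatic slides above non-metastatic ones almost every time, which is a statement about ranking quality rather than a simple count of correct answers.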