How Drive.ai Is Mastering Autonomous Driving With Deep Learning

#artificialintelligence

Among all of the self-driving startups working toward Level 4 autonomy (a self-driving system that doesn't require human intervention in most scenarios), Mountain View, Calif.-based Drive.ai sees deep learning as the only viable way to make a truly useful autonomous car in the near term, says Sameep Tandon, cofounder and CEO. "If you look at the long-term possibilities of these algorithms and how people are going to build [self-driving cars] in the future, having a learning system just makes the most sense. There's so much complication in driving, there are so many things that are nuanced and hard, that if you have to do this in ways that aren't learned, then you're never going to get these cars out there." It's only been about a year since Drive went public, but already, the company has a fleet of four vehicles navigating (mostly) autonomously around the San Francisco Bay Area--even in situations (such as darkness, rain, or hail) that are notoriously difficult for self-driving cars. Last month, we went out to California to take a ride in one of Drive's cars, and to find out how it's using deep learning to master autonomous driving.


Nvidia Beats Earnings Estimates As Its Artificial Intelligence Business Keeps On Booming

#artificialintelligence

Nvidia CEO Jen-Hsun Huang introducing the Nvidia Spot, a USD 49.95 microphone and speaker that will let owners use Google Assistant anywhere in a home, at the company's CES 2017 keynote (Photo by Ethan Miller/Getty Images) Nvidia continued to see demand for its graphics processors in the emerging world of artificial intelligence, according to its fourth quarter earnings reported Thursday. In its fourth quarter earnings release, the Santa Clara, Calif.-based company reported revenue of $2.17 billion, up 55% year over year, on earnings per share of $1.13, up 117% from a year ago. Wall Street analysts had estimated $2.11 billion in revenue on EPS of 83 cents. Traditionally, the company's processors have mostly been used to power the latest gaming graphics, but the chips have become popular for running AI software in data centers and autonomous vehicles. A specific branch of AI, called deep learning, is where Nvidia's processors particularly shine.


UCSF, NVIDIA join to research AI use in medical imaging

#artificialintelligence

UC San Francisco is upping its research into advanced computing in healthcare, launching an artificial intelligence center specifically to advance its use in medical imaging. The Center for Intelligent Imaging will develop and apply artificial intelligence in the quest to find new ways to use radiology to look inside the body and to evaluate health and disease. UCSF investigators in the center will work with Santa Clara, Calif.-based NVIDIA, which develops AI products, to support the center's infrastructure and tools. The collaboration will aim to create new ways to enable the translation of AI into clinical practice. "Artificial intelligence represents the next frontier for diagnostic medicine," says Christopher Hess, MD, chair of UCSF's Department of Radiology and Biomedical Imaging.


Google releases TensorFlow 1.0 with new machine learning tools

#artificialintelligence

At Google's inaugural TensorFlow Dev Summit in Mountain View, California, today, Google announced the release of version 1.0 of its TensorFlow open source framework for deep learning, a trendy type of artificial intelligence. Google says the release's application programming interface (API) is now production-ready. There are also new tools that will be part of the framework, which includes artificial neural networks that can be trained on data and can then make inferences about new data. The release now includes more traditional machine learning tools, such as K-means and support vector machines (SVMs), TensorFlow's engineering director, Rajat Monga, said at the conference. And there's an integration with the Python-based Keras library, which was originally meant to ease the use of the Theano deep learning framework.