Drive.ai is a Silicon Valley startup working on a kit to retrofit your ride. If Drive.ai is a success, your first self-driving car might already be parked in the driveway. The Silicon Valley startup, founded recently by a team of former Stanford University Artificial Intelligence Lab researchers, is working on a software kit that can be used to retrofit existing vehicles. "We started Drive.ai because we believe there's a real opportunity to make our roads, our commutes, and our families safer," the company announced in a statement on its blog, citing a statistic that more than one million people die each year worldwide in automobile accidents caused by human error. At its foundation, Drive.ai is looking to use deep learning -- which its founders consider the most effective form of artificial intelligence ever developed -- to achieve a breakthrough in a field that giant companies such as Google and General Motors have been trying to master for years. "Unlike other forms of AI, which involve programming many sets of rules, a deep learning algorithm learns more like a human brain."
UCLA researchers have developed a new laser-based technology to rapidly screen blood samples for the presence of cancer cells. The label-free system measures 16 different physical characteristics of each cell and analyzes the data to identify whether the cell is cancerous. Because it introduces no labeling chemicals and is gentle on the cells, the technique leaves them alive and available for further inspection by other means. It relies on a photonic time stretch microscope and a computer that runs deep learning artificial intelligence algorithms. The microscope can take millions of images per second thanks to unusual optics that produce high-quality shots even at this speed.
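The pipeline described above, measuring a fixed set of physical features per cell and feeding them to a learned classifier, can be sketched as follows. Everything here is illustrative: the data is synthetic and the model is a simple logistic-regression stand-in for the deep learning algorithms the UCLA system actually runs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 cells x 16 physical features (e.g. size,
# density, refractive index); label 1.0 = cancerous, 0.0 = healthy.
# Real feature values would come from the time stretch microscope.
X = rng.normal(size=(200, 16))
true_w = rng.normal(size=16)          # hidden rule generating the labels
y = (X @ true_w > 0).astype(float)

# Logistic-regression classifier trained by gradient descent -- a minimal
# placeholder for the article's deep learning model.
w = np.zeros(16)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted cancer probability
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

# Evaluate on the training set (a real system would hold out test cells).
pred = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2f}")
```

The key point is the shape of the problem: a fixed 16-dimensional feature vector per cell and a binary decision, which is why a learned classifier fits naturally once enough labeled cells are available.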
Serving more than a billion people a day, Facebook has its work cut out for it when providing customized news feeds. That is where the social network giant takes advantage of deep learning to serve up the most relevant news to its vast user base. Facebook is challenged with finding the best personalized content, Andrew Tulloch, Facebook software engineer, said at the company's recent @scale conference in Silicon Valley. "Over the past year, more and more, we've been applying deep learning techniques to a bunch of these underlying machine learning models that power what stories you see." Applying such concepts as neural networks, deep learning is used in production in event prediction, machine translation models, natural language understanding, and computer vision services. Event prediction, in particular, is one of the largest machine learning problems at Facebook, which must serve the top couple of stories out of thousands of possibilities for users, all in a few hundred milliseconds.
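The ranking step described above, scoring thousands of candidate stories and serving only the top few within a tight latency budget, can be sketched as follows. The scoring function is a hypothetical placeholder, not Facebook's actual event-prediction model.

```python
import heapq
import random

random.seed(42)

def score(story_features):
    # Placeholder for a learned event-prediction model that estimates
    # how likely the user is to engage with the story.
    return sum(story_features)

# Thousands of candidate stories, each reduced to a small feature vector.
candidates = [[random.random() for _ in range(5)] for _ in range(10_000)]

# Keep only the top 3 stories; heapq.nlargest avoids sorting all 10,000
# candidates, which matters when the whole request must finish in a few
# hundred milliseconds.
top_stories = heapq.nlargest(3, candidates, key=score)
print(len(top_stories))  # 3
```

In production the expensive part is `score` itself, so systems like this typically run a cheap model over all candidates first and reserve the heavier deep learning model for a short list.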
AlphaGo's uncanny success at the game of Go was taken by many as a death knell for the dominance of the human intellect, but Google researcher David Silver doesn't see it that way. Instead, he sees a world of potential benefits. As one of the lead architects behind Google DeepMind's AlphaGo system, which defeated South Korean Go champion Lee Se-dol 4 games to 1 in March, Silver believes the technology's next role should be to help advance human health. "We'd like to use these technologies to have a positive impact in the real world," he told an audience of A.I. researchers Tuesday at the International Joint Conference on Artificial Intelligence in New York. With more possible board combinations than there are atoms in the universe, Go has long been considered the ultimate challenge for A.I. researchers.
Anyone concerned about computers taking over should look away now, because they are a step closer to sounding just like humans. Researchers at Google's DeepMind unit in the UK have been working on making computer-generated speech sound as "natural" as a human voice. The technology, called WaveNet, is focused on speech synthesis, or text-to-speech, and was found to sound more natural than any of Google's existing products. However, this was only achieved after the WaveNet artificial neural network was trained to produce English and Chinese speech, which required copious amounts of computing power, so the technology probably won't be hitting the mainstream any time soon. WaveNet uses a convolutional neural network, a deep learning architecture that is trained on data and then makes inferences about new data; in WaveNet's case, the network is also used to generate new data.
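The generation process WaveNet uses, predicting raw audio one sample at a time from the samples that preceded it, can be sketched as follows. The predictor here is a toy linear model with random weights, not WaveNet's trained dilated convolutional network; only the autoregressive loop structure is the point.

```python
import numpy as np

rng = np.random.default_rng(1)

receptive_field = 8   # how many past samples the model conditions on
# Toy "learned" parameters; a real WaveNet would have millions of weights.
weights = rng.normal(scale=0.1, size=receptive_field)

def predict_next(history):
    # Stand-in for the network's prediction of the next audio sample,
    # squashed into [-1, 1] like a normalized waveform value.
    return float(np.tanh(history @ weights))

# Seed the generator with a short context, then extend it sample by sample.
samples = list(rng.normal(scale=0.1, size=receptive_field))
for _ in range(100):
    context = np.array(samples[-receptive_field:])
    samples.append(predict_next(context))

print(len(samples))  # 108 samples: 8 seed samples plus 100 generated
```

This one-sample-at-a-time loop is also why generation is so expensive: at CD-quality rates the model must be evaluated tens of thousands of times per second of audio, which is consistent with the article's note about copious computing power.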