The branch of artificial intelligence called deep learning has given us new wonders such as self-driving cars and instant language translation on our phones. Now it's about to inject smarts into every other object imaginable. That's because makers of silicon processors, from giants such as Intel Corp. and Qualcomm Technologies Inc. to a raft of smaller companies, are starting to embed deep learning software into their chips, particularly for mobile vision applications. In fairly short order, that's likely to lead to much smarter phones, drones, robots, cameras, wearables and more. "Consumers will be genuinely amazed at the capabilities of these devices," said Cormac Brick, vice president of machine learning for Movidius Ltd., a maker of vision processor chips in San Mateo, Calif.
Ng announced Tuesday that he had raised money from venture capital firms New Enterprise Associates, Sequoia Capital and Greylock Partners, as well as SoftBank Group Corp. Under Ng, Baidu released a voice-based operating system that users can talk to - much like Amazon's Alexa voice assistant or Apple's Siri - and also started working on self-driving cars and face recognition technology to open things like transit turnstiles when users approach. "I think it's a more systematic, repeatable process than most people think," said Ng, who also taught artificial intelligence courses at Stanford University. The first company to receive money from the fund will be Landing.ai.
The biggest hardware and software arrival since the iPad in 2010 has been Amazon's Echo voice-controlled intelligent speaker, powered by its Alexa software assistant. But just because you're not seeing amazing new consumer tech products on Amazon, in the app stores, or at the Apple Store or Best Buy, that doesn't mean the tech revolution is stuck or stopped. The major areas of innovation are: artificial intelligence and machine learning, augmented reality, virtual reality, robotics and drones, smart homes, self-driving cars, and digital health and wearables. Google has changed its entire corporate mission to be "AI first" and, with Google Home and Google Assistant, aims to perform tasks via voice commands and eventually hold real, unstructured conversations.
This week about 180,000 visitors flocked to the world's biggest technology exhibition, the Consumer Electronics Show in Las Vegas. And while all the usual gadgets made an appearance, from smart fridges to self-driving cars, there was one dominant theme: speech. With nearly half of people in the US using voice-activated digital assistants on their smartphones or tablets, and ownership of standalone digital assistants, like Google Home and Amazon Echo, expected to double in 2018, every tech company now wants a slice of the pie. Alexa, Amazon's voice assistant, is now available in everything from microwaves to cars, and from TVs to mirrors. Google had more than 350 voice-controlled devices at the show, including speakers, cars, and a giant toy town complete with a railway.
Without a doubt, 2016 was an amazing year for machine learning (ML) and artificial intelligence (AI) awareness in the press. But most people probably can't name three applications for machine learning other than self-driving cars and perhaps the voice-activated assistant hiding in their phone. There's also a lot of confusion about where the artificial intelligence program actually exists. When you ask Siri to play a song or tell you what the weather will be like tomorrow, does "she" live in your phone or in the Apple cloud? And while you ponder those obscure questions, many investors and technology recommenders are trying to determine which vendors will provide the best underlying hardware chips, for which applications, and why.