The 2025 market for AI in automotive, including ADAS and robotic vehicles, is estimated at $2.75 billion, of which $2.5 billion will be "ADAS only". Artificial intelligence (AI) is gradually invading our lives through everyday objects like smartphones, smart speakers, and surveillance cameras. The hype around AI has led some players to treat it as a secondary objective, more or less difficult to achieve, rather than as the central tool for reaching the real objective: autonomous driving (AD). Who are the winners and losers in the race for autonomy? "AI is gradually invading our lives and this will be particularly true in the automotive world," asserts Yohann Tschudi, Technology & Market Analyst, Computing & Software at Yole Développement (Yole). "AI could be the central tool to achieve AD; in the meantime, some players are afraid of overinflated hype and do not put AI at the center of their AD strategy."
SAN MATEO, California – April 28th, 2016 – Movidius, the leader in low-power machine vision technology, today announced the Fathom Neural Compute Stick, the world's first deep learning acceleration module, and the Fathom deep learning software framework. Together, the two tools allow powerful neural networks to be moved out of the cloud and deployed natively in end-user devices. The Fathom Neural Compute Stick is the world's first embedded neural network accelerator: with the company's ultra-low-power, high-performance Myriad 2 processor inside, it can run fully trained neural networks at under 1 watt of power. Thanks to standard USB connectivity, the Fathom Neural Compute Stick can be connected to a range of devices and enhance their neural compute capabilities by orders of magnitude.
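The core idea here, running an already-trained network directly on the device rather than in the cloud, can be sketched in a few lines. This is a toy illustration only: the weights below are invented for the example, and the real Fathom SDK exposed its own API rather than plain Python.

```python
import math

# Hypothetical pre-trained parameters for a tiny 2-input, 2-hidden,
# 1-output network. On a real deployment these would be the frozen
# weights produced by offline training, loaded onto the accelerator.
W1 = [[0.5, -0.3], [0.8, 0.2]]
b1 = [0.1, -0.1]
W2 = [1.0, -1.0]
b2 = 0.05

def relu(x):
    return max(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def infer(x):
    """Forward pass of the fixed, pre-trained network, entirely local:
    no network round trip to a cloud service is involved."""
    hidden = [relu(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)) + b2)

print(round(infer([1.0, 0.5]), 4))  # prints 0.4256
```

The point of on-device inference is exactly this shape of computation: only fixed multiply-accumulate passes over frozen weights, which is why a sub-1-watt chip like the Myriad 2 can handle it.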
ARM Ltd. joined a growing roster of processor specialists zeroing in on artificial intelligence and machine learning applications with the introduction of two new processor cores, one emphasizing performance and the other efficiency. The chip intellectual property vendor unveiled its high-end Cortex-A75 and "high-efficiency" Cortex-A55 processors during this week's Computex 2017 event in Taipei, Taiwan. Along with greater efficiency and processing horsepower, the company is positioning its latest cores to fill a gap in cloud computing by handling more data processing and storage directly on connected devices. Along with accelerating AI development, ARM is also advancing its flexible processing approach, which incorporates "big" and "LITTLE" processor cores into a single computing cluster. That architecture is based on the assumption that the highest CPU performance is required only about 10 percent of the time.
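The big/LITTLE idea, keep routine work on efficient cores and wake the fast cores only for the rare peaks, can be sketched as a toy dispatcher. The threshold, core names, and workload numbers below are illustrative assumptions, not ARM's actual scheduling logic:

```python
# Toy big.LITTLE-style dispatch: send a task to an efficiency
# ("LITTLE") core by default, and to a performance ("big") core only
# when its demanded load exceeds a threshold. All values are invented
# for illustration.

BIG_THRESHOLD = 0.9  # fraction of peak performance that justifies a big core

def dispatch(load):
    """Return which core class should run a task of the given load (0..1)."""
    return "big" if load > BIG_THRESHOLD else "LITTLE"

# If only ~10% of tasks demand peak performance, as the architecture
# assumes, almost all work lands on the low-power cores:
loads = [0.2, 0.3, 0.95, 0.1, 0.4, 0.5, 0.15, 0.25, 0.35, 0.6]
assignments = [dispatch(l) for l in loads]
print(assignments.count("LITTLE"), "of", len(loads), "tasks on LITTLE cores")
# prints: 9 of 10 tasks on LITTLE cores
```

The design choice follows directly from the stated assumption: if peak performance is needed only about 10 percent of the time, power is dominated by the common case, so the common case should run on the smaller cores.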
When AlphaGo, Google's artificial intelligence program, defeated champion Go player Lee Sedol earlier this year, everyone praised its advanced software brain. But the program, developed by Google's DeepMind research team, also had some serious hardware brawn standing behind it. The program was running on custom accelerators that Google's hardware engineers had spent years building in secret, the company said. With the new accelerators plugged into the AlphaGo servers, the program could recognize patterns in its vast library of game data faster than it could with standard processors. The increased speed helped AlphaGo make the kind of quick, intuitive judgments that define how humans approach the game.