Machine learning, and deep learning in particular, is driving the evolution of artificial intelligence (AI). Initially, deep learning was primarily a software play. Starting in 2016, the need for more efficient hardware acceleration of AI/ML/DL was recognized in both academia and industry. Since then, more and more players, including the world's top semiconductor companies, a number of startups, and even tech giants such as Google, have jumped into the race. I believe it is interesting to look at them together, so I built this list of AI/ML/DL ICs and IPs on GitHub and keep it updated. If you have any suggestions or new information, please let me know. The companies and products in the list are organized into five categories, as shown in the following table. Intel purchased Nervana Systems, which was developing a GPU/software approach in addition to its Nervana Engine ASIC. Intel also plans to integrate the technology into the Phi platform via the Knights Crest project.
The 2025 market for AI, including ADAS and robotic vehicles, is estimated at $2.75 billion, of which $2.5 billion will be "ADAS only"... Artificial Intelligence (AI) is gradually invading our lives through everyday objects like smartphones, smart speakers, and surveillance cameras. The hype around AI has led some players to consider it a secondary objective, more or less difficult to achieve, rather than as a central tool for achieving the real objective: autonomy. Who are the winners and losers in the race for autonomy? "AI is gradually invading our lives and this will be particularly true in the automotive world," asserts Yohann Tschudi, Technology & Market Analyst, Computing & Software at Yole Développement (Yole). "AI could be the central tool to achieve AD, in the meantime some players are afraid of overinflated hype and do not put AI at the center of their AD strategy."
When the artificial intelligence program AlphaGo defeated champion Go player Lee Sedol earlier this year, everyone praised its advanced software brain. But the program, developed by Google's DeepMind research team, also had some serious hardware brawn standing behind it. The program was running on custom accelerators that Google's hardware engineers had spent years building in secret, the company said. With the new accelerators plugged into the AlphaGo servers, the program could recognize patterns in its vast library of game data faster than it could with standard processors. The increased speed helped AlphaGo make the kind of quick, intuitive judgments that define how humans approach the game.
ARM Ltd. joined a growing roster of processor specialists zeroing in on artificial intelligence and machine learning applications with the introduction of two new processor cores, one emphasizing performance and the other efficiency. The chip intellectual property vendor unveiled its high-end Cortex-A75 paired with its "high-efficiency" Cortex-A55 processor during this week's Computex 2017 event in Taipei, Taiwan. Along with greater efficiency and processing horsepower, the chipmaker is positioning its latest processors as filling a gap in cloud computing by boosting data processing and storage on connected devices. Along with accelerating AI development, ARM also is advancing its flexible processing approach, which incorporates "big" and "LITTLE" processor cores into a single computing cluster. That architecture is based on the assumption that the highest CPU performance is required only about 10 percent of the time.
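The rationale behind big.LITTLE can be illustrated with a simple average-power model: if peak performance is only needed about 10 percent of the time, the remaining work can migrate to the low-power core. The power figures below are purely illustrative assumptions, not ARM specifications.

```python
# Illustrative energy model for a big.LITTLE-style design.
# The wattage figures are assumptions for illustration only.

BIG_POWER_W = 2.0      # assumed power draw of a "big" core under load
LITTLE_POWER_W = 0.4   # assumed power draw of a "LITTLE" core under load
HEAVY_FRACTION = 0.10  # the ~10% of time that peak performance is needed

def avg_power_big_little(heavy_fraction=HEAVY_FRACTION):
    """Average power when heavy phases run on the big core and
    everything else migrates to the LITTLE core."""
    return heavy_fraction * BIG_POWER_W + (1 - heavy_fraction) * LITTLE_POWER_W

def avg_power_big_only():
    """Average power when all work stays on the big core."""
    return BIG_POWER_W

savings = 1 - avg_power_big_little() / avg_power_big_only()
print(f"average power: {avg_power_big_little():.2f} W "
      f"({savings:.0%} lower than running the big core all the time)")
```

Under these assumed numbers the cluster averages 0.56 W instead of 2.0 W, which is the kind of saving that motivates pairing cores of different sizes.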
Today, Intel announced that its Xeon Phi processors are finally available to customers. This comes nearly a year after the company's originally quoted launch date, and seven months after Intel announced that pre-production chips were already in use by select partners. The Xeon Phi processors feature double-precision performance in excess of 3 teraflops along with 8 teraflops of single-precision performance. All Xeon Phi processors incorporate 16GB of on-package MCDRAM memory, which Intel says is five times more power efficient than GDDR5 and offers 500GB/s of sustained memory bandwidth. The MCDRAM can effectively be used as a high-speed cache or as a complementary addition to the system DDR4 memory.
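To put the quoted figures in perspective, a quick back-of-envelope sketch using only the capacity and bandwidth numbers from the announcement shows how long one full pass over the on-package memory takes at the sustained rate:

```python
# Back-of-envelope arithmetic using the figures quoted above.
MCDRAM_CAPACITY_GB = 16   # on-package MCDRAM capacity
MCDRAM_BW_GBPS = 500      # sustained bandwidth quoted by Intel

def stream_time_ms(size_gb, bw_gbps):
    """Time in milliseconds to stream `size_gb` once at `bw_gbps`."""
    return size_gb / bw_gbps * 1000

print(f"one full pass over MCDRAM: "
      f"{stream_time_ms(MCDRAM_CAPACITY_GB, MCDRAM_BW_GBPS):.0f} ms")
```

At 500GB/s, streaming the entire 16GB once takes about 32 ms, which is why bandwidth-bound kernels benefit from keeping their working set in MCDRAM, whether it is configured as a cache or addressed directly alongside DDR4.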