In my previous post on the recent Linley Processor Conference, I wrote about the ways that semiconductor companies are developing heterogeneous systems to reach higher levels of performance and efficiency than traditional hardware allows. One of the areas where this is most urgently needed is vision processing, a challenge that got a lot of attention at this year's conference.
The growth of artificial intelligence (AI) demands that semiconductor companies re-architect their system-on-chip (SoC) designs to provide more scalable levels of performance, flexibility, efficiency, and integration. From the edge to data centers, AI applications require a rethink of memory structures and of the numbers and types of heterogeneous processors and hardware accelerators, along with careful consideration of how dataflow is enabled and managed between the various high-performance IP blocks. This paper will define AI, describe its applications and the problems it presents, and explain how designers can address those problems through new and holistic approaches to SoC and network-on-chip (NoC) design. It also describes the challenges of implementing AI functionality in automotive SoCs with ISO 26262 functional safety requirements.
Synopsys explains that the heterogeneous architecture of its DesignWare ARC EV7x Vision Processors, which integrates a deep neural network (DNN) accelerator, a vector digital signal processor (DSP), and a vector floating-point unit (FPU), delivers 35 tera operations per second (TOPS) for artificial intelligence SoCs (AI SoCs). The EV7x processors, with their DNN accelerator, provide sufficient performance for AI-intensive edge applications. The EV7x integrates up to four enhanced vector processing units (VPUs) and a DNN accelerator with up to 14,080 MACs to deliver up to 35 TOPS in 16-nm FinFET process technologies under typical conditions, which Synopsys reports is four times the performance of the ARC EV6x processors. Each EV7x VPU includes a 32-bit scalar unit and a 512-bit-wide vector DSP and can be configured for 8-, 16-, or 32-bit operations to perform simultaneous multiply-accumulates on different streams of data. The optional DNN accelerator scales from 880 to 14,080 MACs and employs a specialized architecture for faster memory access, higher performance, and better power efficiency than alternative neural network IP.
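The relationship between MAC count and headline TOPS can be checked with simple arithmetic: each multiply-accumulate is conventionally counted as two operations, so peak throughput is MACs × 2 × clock frequency. The sketch below is a back-of-envelope estimate, not a Synopsys formula; the ~1.25 GHz clock is an assumption chosen because it makes a 14,080-MAC array land near the quoted 35 TOPS.

```python
# Back-of-envelope peak-throughput estimate for a MAC array.
# Assumption: each multiply-accumulate counts as 2 operations,
# the usual convention behind marketing TOPS figures. The 1.25 GHz
# clock below is an illustrative guess, not a published spec.

def estimate_tops(num_macs: int, clock_ghz: float) -> float:
    """Theoretical peak TOPS for num_macs MAC units at clock_ghz GHz."""
    ops_per_second = num_macs * 2 * clock_ghz * 1e9  # 2 ops per MAC
    return ops_per_second / 1e12                     # scale to tera-ops

# 14,080 MACs at an assumed ~1.25 GHz comes out near the quoted 35 TOPS
print(f"{estimate_tops(14_080, 1.25):.1f} TOPS")  # → 35.2 TOPS
```

Real sustained throughput is lower, since it depends on utilization, memory bandwidth, and the operand widths the VPUs are configured for.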
New applications are driving demand for faster and more efficient vision processing. The Hot Chips conference, now in its 28th year, is known for announcements of "big iron" such as the Power and SPARC chips behind some of the world's fastest systems. But these days the demand for processing power is coming from new places. One of the biggest is vision processing, driven by the proliferation of cameras; new applications in cars, phones, and all sorts of "things"; and the rapid progress of neural networks for object recognition. All of this takes a lot of horsepower, and at this week's conference, several companies talked about different ways to tackle it.
Machine learning, and especially deep learning, is driving the evolution of artificial intelligence (AI). Initially, deep learning was primarily a software play. Starting in 2016, the need for more efficient hardware acceleration of AI/ML/DL was recognized in both academia and industry. This year, we saw more and more players jump into the race, including the world's top semiconductor companies, a number of startups, and even tech giants such as Google. I believe it could be very interesting to look at them together, so I built this list of AI/ML/DL ICs and IPs on GitHub and keep it updated. If you have any suggestions or new information, please let me know. The companies and products in the list are organized into five categories, as shown in the following table. Intel purchased Nervana Systems, which was developing a GPU/software approach in addition to its Nervana Engine ASIC. Intel is also planning to integrate this technology into the Phi platform via a Knights Crest project.