Embedded Vision Processor IP prepares for AI-intensive edge applications -- Softei.com


Synopsys says the heterogeneous architecture of its DesignWare ARC EV7x Embedded Vision Processors, which integrates a deep neural network (DNN) accelerator, a vector digital signal processor (DSP) and a vector floating point unit (FPU), delivers 35 tera operations per second (TOPS) for artificial intelligence systems-on-chip (AI SoCs), giving the processors the performance needed for AI-intensive edge applications.

The ARC EV7x Vision Processors integrate up to four enhanced vector processing units (VPUs) and a DNN accelerator with up to 14,080 MACs, delivering up to 35 TOPS in 16-nm FinFET process technologies under typical conditions, four times the performance of the ARC EV6x processors, Synopsys reports. Each EV7x VPU includes a 32-bit scalar unit and a 512-bit-wide vector DSP, and can be configured for 8-, 16-, or 32-bit operations to perform simultaneous multiply-accumulates on different data streams. The optional DNN accelerator scales from 880 to 14,080 MACs and employs a specialized architecture that Synopsys says provides faster memory access, higher performance and better power efficiency than alternative neural network IP.
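As a rough back-of-the-envelope illustration of how these figures relate, the sketch below computes the parallel lane counts implied by a 512-bit vector DSP at each supported element width, and estimates peak throughput from the maximum MAC count, counting each multiply-accumulate as two operations. The 1.25 GHz clock used here is an assumption for illustration only; the article does not state a clock frequency.

    #include <stdio.h>

    int main(void) {
        /* Lane counts for a 512-bit vector DSP at the element widths
           the article lists (8-, 16- and 32-bit operations). */
        const int vector_bits = 512;
        const int widths[] = {8, 16, 32};
        for (int i = 0; i < 3; i++)
            printf("%2d-bit elements: %d parallel lanes\n",
                   widths[i], vector_bits / widths[i]);

        /* Peak-throughput sketch for the largest DNN accelerator option.
           A MAC counts as two operations (multiply + add); the 1.25 GHz
           clock is an assumed value, not a figure from the article. */
        const double macs        = 14080.0;
        const double ops_per_mac = 2.0;
        const double clock_ghz   = 1.25;
        printf("Estimated peak: %.1f TOPS\n",
               macs * ops_per_mac * clock_ghz / 1000.0); /* ~35.2 TOPS */
        return 0;
    }

Under these assumptions the arithmetic lands close to the quoted 35 TOPS headline figure, which suggests the claim refers to peak MAC throughput of the fully configured DNN accelerator rather than sustained application-level performance.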