New Synopsys Processor Core Targets Traditional- and Deep Learning-based Embedded Vision

#artificialintelligence

In early 2015, Synopsys' DesignWare EV5x processor core family attracted notable attention for its unique co-processor engine focused on CNNs (convolutional neural networks) for object recognition and other vision functions. The company's new EV6x processor core family includes an upgraded CNN engine that delivers substantial performance gains over its predecessor while – in a nod to customers who prefer to leverage "classical" computer vision algorithms – decoupling it from the remainder of the core, which now includes 512-bit vector DSPs (Figure 1).

Figure 1. Synopsys' new DesignWare EV6x family (top) comes in three variants and includes a 512-bit vector DSP engine, while making the CNN engine an option (bottom), in contrast with its EV5x predecessor.

EV6x family members include a one- to four-core "Vision CPU," which finds use both for control functions and for image pre-processing operations such as greyscale conversion, according to Senior Product Marketing Manager Mike Thompson. The Vision CPU cores start with the same 32-bit scalar processor as the prior-generation EV5x and add a new 512-bit vector DSP engine capable of 155 GOPS peak throughput at an 800 MHz clock speed, significantly boosting per-core performance (Figure 2).
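The greyscale conversion Thompson cites is a good example of the data-parallel pre-processing a wide vector DSP is built for. The scalar C sketch below is purely illustrative – it is not Synopsys code, and the coefficients are the common BT.601 integer approximation – but it shows the per-pixel arithmetic that a 512-bit engine would spread across many SIMD lanes, converting dozens of 8-bit pixels per instruction.

```c
#include <stddef.h>
#include <stdint.h>

/* Convert interleaved RGB888 pixels to 8-bit greyscale using the common
 * integer BT.601 approximation: grey = (77*R + 150*G + 29*B) >> 8.
 * This is a plain scalar reference loop; on a 512-bit vector DSP the same
 * body would be mapped onto wide SIMD lanes so that many pixels are
 * converted per instruction rather than one at a time. */
void rgb_to_grey(const uint8_t *rgb, uint8_t *grey, size_t num_pixels)
{
    for (size_t i = 0; i < num_pixels; ++i) {
        uint32_t r = rgb[3 * i + 0];
        uint32_t g = rgb[3 * i + 1];
        uint32_t b = rgb[3 * i + 2];
        grey[i] = (uint8_t)((77 * r + 150 * g + 29 * b) >> 8);
    }
}
```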


Infineon Collaborates with Synopsys to Accelerate AI in Automotive Applications (TimesTech)

#artificialintelligence

Munich – 17 September 2019 – Artificial intelligence (AI) and neural networks are becoming a key factor in developing safer, smarter, and more eco-friendly cars. To support AI-driven solutions with its future automotive microcontrollers, Infineon Technologies has started a collaboration with Synopsys, Inc. Next-generation AURIX microcontrollers from Infineon will integrate a new high-performance AI accelerator, the Parallel Processing Unit (PPU), which will employ Synopsys' DesignWare ARC EV Processor IP. AI and neural networks are fundamental building blocks for future automated driving applications such as object classification, target tracking, and path planning. They also play an important role in optimizing many other automotive applications, helping to reduce the cost of ECU systems, improve their performance, and accelerate time-to-market.


Vision is the next big challenge for chips

ZDNet

In my previous post on the recent Linley Processor Conference, I wrote about the ways that semiconductor companies are developing heterogeneous systems to reach higher levels of performance and efficiency than with traditional hardware. One of the areas where this is most urgently needed is vision processing, a challenge that got a lot of attention at this year's conference.


The push to process vehicle sensor data

#artificialintelligence

Continued from: "Advanced image sensors take automotive vision beyond 20/20." And there are many others now in the race to process all of that vehicle sensor data. Among them, Toshiba has been evolving its Visconti line of image recognition processors in parallel with increasingly demanding European New Car Assessment Programme (Euro NCAP) requirements. In 2014, Euro NCAP began rating vehicles based on active safety technologies such as lane departure warning (LDW), lane keep assist (LKA), and autonomous emergency braking (AEB). These requirements extended to daytime pedestrian AEB and speed assist systems (SAS) in 2016.
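To make those active-safety functions a little more concrete, the C sketch below shows the kind of decision a lane departure warning stage ultimately reduces to once the vision processor has estimated lane geometry. The function name, parameters, and simple thresholding scheme are assumptions for illustration only, not details of Toshiba's Visconti pipeline or of the Euro NCAP test protocols.

```c
#include <math.h>
#include <stdbool.h>

/* Simplified, illustrative lane-departure-warning check: given the vision
 * pipeline's estimate of the vehicle's lateral offset from the lane centre
 * (metres) and its lateral velocity (metres/second), warn when the position
 * predicted 'horizon_s' seconds ahead would leave the lane. Real systems add
 * hysteresis, turn-signal suppression, and confidence gating. */
bool ldw_should_warn(double lateral_offset_m,
                     double lateral_velocity_mps,
                     double lane_half_width_m,
                     double horizon_s)
{
    double predicted_offset_m = lateral_offset_m + lateral_velocity_mps * horizon_s;
    return fabs(predicted_offset_m) > lane_half_width_m;
}
```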


Machine Learning's Growing Divide

#artificialintelligence

Machine learning is one of the hottest areas of development, but most of the attention so far has focused on the cloud, algorithms, and GPUs. For the semiconductor industry, the real opportunity lies in optimizing and packaging solutions into usable forms, such as for automotive applications or for battery-operated consumer and IoT products.